DSpace Community: 2014
http://hdl.handle.net/10525/2424
An Approach for a more Objective Evaluation of Practical Projects, Used in the Training Process
http://hdl.handle.net/10525/2480
Title: An Approach for a more Objective Evaluation of Practical Projects, Used in the Training Process<br/><br/>Authors: Delinov, Emil; Eskenazi, Avram<br/><br/>Abstract: Well-prepared, adaptive and sustainably developing specialists are an important competitive advantage, but also one of the main challenges for businesses. One option the education system has for creating and developing staff adequate to these needs is the development of projects with topics drawn from the real economy ("practical projects"). Objective assessment is an essential driver and motivator, and is based on a system of well-chosen, well-defined and specific criteria and indicators. One approach to a more objective evaluation of practical projects is to find more objective weights for the criteria. A natural and reasonable way to do this is to accumulate the opinions of proven experts and subsequently derive the weights from the accumulated data. The preparation and conduct of a survey among recognized experts in the field of project-based learning in mathematics, informatics and information technologies is described. Processing the accumulated data with AHP allowed us to determine the weights of the evaluation criteria objectively and hence to achieve the desired objectivity. ACM Computing Classification System (1998): K.3.2.
VisibleZ: A Mainframe Architecture Emulator for Computing Education
http://hdl.handle.net/10525/2479
Title: VisibleZ: A Mainframe Architecture Emulator for Computing Education<br/><br/>Authors: Woolbright, David; Zanev, Vladimir; Rogers, Neal<br/><br/>Abstract: This paper describes a PC-based mainframe computer emulator called VisibleZ and its use in teaching mainframe Computer Organization and Assembly Programming classes. VisibleZ models IBM’s z/Architecture and allows direct interpretation of mainframe assembly language object code in a graphical user interface environment developed in Java. The VisibleZ emulator acts as an interactive visualization tool to simulate enterprise computer architecture. The provided architectural components include main storage, CPU, registers, Program Status Word (PSW), and I/O Channels. Particular attention is given to providing visual cues to the user through color-coding of screen components, machine instruction execution, and animation of the machine architecture components. Students interact with VisibleZ by executing machine instructions in a step-by-step mode, simultaneously observing the contents of memory, registers, and changes in the PSW during the fetch-decode-execute machine instruction cycle. The object-oriented design and implementation of VisibleZ allows students to develop their own instruction semantics by coding Java for existing z/Architecture machine instructions, or by designing and implementing new machine instructions. The use of VisibleZ in lectures, labs, and assignments is described in the paper and supported by a website that hosts an extensive collection of related materials. VisibleZ has proven to be a useful tool in mainframe Assembly Language Programming and Computer Organization classes. Using VisibleZ, students develop a better understanding of mainframe concepts, components, and how the mainframe computer works. ACM Computing Classification System (1998): C.0, K.3.2.
An Adaptation of the Hoshen-Kopelman Cluster Counting Algorithm for Honeycomb Networks
http://hdl.handle.net/10525/2478
Title: An Adaptation of the Hoshen-Kopelman Cluster Counting Algorithm for Honeycomb Networks<br/><br/>Authors: Popova, Hristina<br/><br/>Abstract: We develop a simplified implementation of the Hoshen-Kopelman cluster counting algorithm adapted for honeycomb networks. In our implementation of the algorithm we assume that all nodes in the network are occupied and links between nodes can be intact or broken. The algorithm counts how many clusters there are in the network and determines which nodes belong to each cluster. The network information is stored in two sets of data: the first is related to the connectivity of the nodes and the second to the state of the links. The algorithm finds all clusters in only one scan across the network, and thereafter cluster relabeling operates on a vector whose size is much smaller than the size of the network. By counting the number of clusters of each size, the algorithm determines the cluster size probability distribution, from which the mean cluster size parameter can be estimated. Although our implementation of the Hoshen-Kopelman algorithm works only for networks with a honeycomb (hexagonal) structure, it can easily be changed to apply to networks with arbitrary connectivity between the nodes (triangular, square, etc.). The proposed adaptation of the Hoshen-Kopelman cluster counting algorithm is applied to studying the thermal degradation of a graphene-like honeycomb membrane by means of Molecular Dynamics simulation with a Langevin thermostat. ACM Computing Classification System (1998): F.2.2, I.5.3.
Augmented Reality as a Method for Expanded Presentation of Objects of Digitized Heritage
http://hdl.handle.net/10525/2477
Title: Augmented Reality as a Method for Expanded Presentation of Objects of Digitized Heritage<br/><br/>Authors: Kolev, Alexander; Dimov, Dimo<br/><br/>Abstract: Augmented reality is the latest among information technologies in the modern electronics industry. Its essence is the addition of advanced computer graphics to real and/or digitized images. This paper gives a brief analysis of the concept and of the approaches to implementing augmented reality for an expanded presentation of a digitized object of national cultural and/or scientific heritage. ACM Computing Classification System (1998): H.5.1, H.5.3, I.3.7.
Representing Equivalence Problems for Combinatorial Objects
http://hdl.handle.net/10525/2476
Title: Representing Equivalence Problems for Combinatorial Objects<br/><br/>Authors: Bouyukliev, Iliya; Dzhumalieva-Stoeva, Mariya<br/><br/>Abstract: Methods for representing equivalence problems of various combinatorial objects as graphs or binary matrices are considered. Such representations can be used for isomorphism testing in classification or generation algorithms. Often it is easier to consider a graph or binary matrix isomorphism problem than to implement heavy algorithms that depend on the particular combinatorial objects. Moreover, well-tested algorithms already exist for the graph isomorphism problem (nauty) and for the binary matrix isomorphism problem (Q-Extension). ACM Computing Classification System (1998): F.2.1, G.4.
Modular Digital Watermarking for Image Verification and Secure Data Storage in Web Applications
http://hdl.handle.net/10525/2475
Title: Modular Digital Watermarking for Image Verification and Secure Data Storage in Web Applications<br/><br/>Authors: Ilchev, Svetozar; Ilcheva, Zlatoliliya<br/><br/>Abstract: Our modular approach to data hiding is an innovative concept in the data hiding research field. It enables the creation of modular digital watermarking methods that have extendable features and are designed for use in web applications. The methods consist of two types of modules – a basic module and an application-specific module. The basic module mainly provides features connected with the specific image format. As JPEG is a preferred image format on the Internet, we have focused on achieving robust and error-free embedding and retrieval of the embedded data in JPEG images. The application-specific modules are adaptable to user requirements in the concrete web application. The experimental results of the modular data watermarking are very promising. They indicate excellent image quality, a satisfactory size of the embedded data, and perfect robustness against JPEG transformations with prespecified compression ratios. ACM Computing Classification System (1998): C.2.0.
On the Busy Period in One Finite Queue of M/G/1 Type with Inactive Orbit
http://hdl.handle.net/10525/2463
Title: On the Busy Period in One Finite Queue of M/G/1 Type with Inactive Orbit<br/><br/>Authors: Dragieva, Velika<br/><br/>Abstract: The paper deals with a single-server finite queuing system where customers who fail to get service are temporarily blocked in the orbit of inactive customers. This model and its variants have many applications, especially for optimization of the corresponding models with retrials. We analyze the system in the non-stationary regime and, using the discrete transformations method, study the busy period length and the number of successful calls made during it. ACM Computing Classification System (1998): G.3, J.7.
Generalized Priority Systems. Analytical Results and Numerical Algorithms
http://hdl.handle.net/10525/2462
Title: Generalized Priority Systems. Analytical Results and Numerical Algorithms<br/><br/>Authors: Mishkoy, Gheorghe<br/><br/>Abstract: A class of priority systems with non-zero switching times, referred to as generalized priority systems, is considered. Analytical results regarding the distribution of busy periods, queue lengths and various auxiliary characteristics are presented. These results can be viewed as generalizations of the Kendall functional equation and the Pollaczek-Khintchine transform equation, respectively. Numerical algorithms for systems’ busy periods and traffic coefficients are developed. ACM Computing Classification System (1998): 60K25.
The Methodology of the Subsistence Minimum Calculation for Developing Countries and its Computation on the Georgian Example
http://hdl.handle.net/10525/2461
Title: The Methodology of the Subsistence Minimum Calculation for Developing Countries and its Computation on the Georgian Example<br/><br/>Authors: Makalatia, Irakli; Krialashvili, Ketevani; Gerliani, Ramazi<br/><br/>Abstract: This article shows the social importance of the subsistence minimum in Georgia and presents the methodology of its calculation. We propose ways of improving the calculation of the subsistence minimum in Georgia and of extending it to other developing countries. The weights of food and non-food expenditures in the subsistence minimum baskets are essential in these calculations. The daily consumption value of the minimum food basket has also been calculated. The average consumer expenditures on food and the share of other expenditures are considered in dynamics. Our methodology of subsistence minimum calculation is applied to the case of Georgia; however, it can be used for similar purposes with data from other developing countries where social stability has been achieved and social inequalities are to be addressed. ACM Computing Classification System (1998): H.5.3, J.1, J.4, G.3.
On the Lp-Norm Regression Models for Estimating Value-at-Risk
http://hdl.handle.net/10525/2460
Title: On the Lp-Norm Regression Models for Estimating Value-at-Risk<br/><br/>Authors: Kumar, Pranesh; Kashanchi, Faramarz<br/><br/>Abstract: Analysis of risk measures associated with price series data movements and their prediction is of strategic importance in the financial markets, as well as to policy makers, in particular for short- and long-term planning for setting up economic growth targets. For example, oil price risk management focuses primarily on when and how an organization can best prevent costly exposure to price risk. Value-at-Risk (VaR) is the commonly practised instrument to measure risk and is evaluated by analysing the negative/positive tail of the probability distributions of the returns (profit or loss). In modelling applications, least-squares estimation (LSE)-based linear regression models are often employed for modeling and analyzing correlated data. These linear models are optimal and perform relatively well under conditions such as the errors following normal or approximately normal distributions, the data being free of large outliers, and the Gauss-Markov assumptions being satisfied. However, in practical situations the LSE-based linear regression models often fail to provide optimal results, for instance in non-Gaussian situations, especially when the errors follow distributions with fat tails and the error terms possess a finite variance. This is the situation in risk analysis, which involves analyzing tail distributions. Thus, applications of the LSE-based regression models may be questioned for appropriateness and may have limited applicability. We have carried out a risk analysis of Iranian crude oil price data based on the Lp-norm regression models and have noted that the LSE-based models do not always perform best. We discuss results from the L1, L2 and L∞-norm based linear regression models. ACM Computing Classification System (1998): B.1.2, F.1.3, F.2.3, G.3, J.2.
Dependence Structure of some Bivariate Distributions
http://hdl.handle.net/10525/2459
Title: Dependence Structure of some Bivariate Distributions<br/><br/>Authors: Dimitrov, Boyan<br/><br/>Abstract: Dependence in the world of uncertainty is a complex concept. However, it exists, is asymmetric, has magnitude and direction, and can be measured. We use some measures of dependence between random events to illustrate how to apply them in the study of dependence between non-numeric bivariate variables and numeric random variables. Graphics show the inner dependence structure in the Clayton Archimedean copula and the bivariate Poisson distribution. We know this approach is valid for studying the local dependence structure of any pair of random variables determined by its empirical or theoretical distribution. It can also be used to simulate dependent events and dependent r.v.’s, although some restrictions apply. ACM Computing Classification System (1998): G.3, J.2.
Teaching Statistics to Engineers: Learning from Experiential Data
http://hdl.handle.net/10525/2458
Title: Teaching Statistics to Engineers: Learning from Experiential Data<br/><br/>Authors: Mandrekar, Vidyadhar<br/><br/>Abstract: The purpose of this work is to argue that engineers can be motivated to study statistical concepts through applications from their own experience connected with statistical ideas. The main idea is to choose data from a manufacturing facility (for example, output from a CMM machine) and explain that even if the parts used do not meet exact specifications, they are used in production. By graphing the data one can show that the error is random but follows a distribution; that is, there is regularity in the data in the statistical sense. As the error distribution is continuous, we advocate that the concept of randomness be introduced starting with continuous random variables, with probabilities connected with areas under the density. Discrete random variables are then introduced in terms of decisions connected with the size of the errors, before generalizing to the abstract concept of probability. Using software, students can then be motivated to study statistical analysis of the data they encounter and the use of this analysis to make engineering and management decisions.
Five Turning Points in the Historical Progress of Statistics - My Personal Vision
http://hdl.handle.net/10525/2457
Title: Five Turning Points in the Historical Progress of Statistics - My Personal Vision<br/><br/>Authors: von Collani, Elart<br/><br/>Abstract: Statistics has penetrated almost all branches of science and all areas of human endeavor. At the same time, statistics is not only misunderstood, misused and abused to a frightening extent, but it is also often much disliked by students in colleges and universities. This lecture discusses the historical development of statistics, aiming at identifying the most important turning points that led to the present state of statistics and at answering the questions “What went wrong with statistics?” and “What to do next?”. ACM Computing Classification System (1998): A.0, A.m, G.3, K.3.2.
A Comparative Analysis of Predictive Learning Algorithms on High-Dimensional Microarray Cancer Data
http://hdl.handle.net/10525/2437
Title: A Comparative Analysis of Predictive Learning Algorithms on High-Dimensional Microarray Cancer Data<br/><br/>Authors: Bill, Jo; Fokoue, Ernest<br/><br/>Abstract: This research evaluates pattern recognition techniques on a subclass of big data where the dimensionality of the input space (p) is much larger than the number of observations (n). Specifically, we evaluate massive gene expression microarray cancer data where the ratio κ is less than one. We explore the statistical and computational challenges inherent in these high dimensional low sample size (HDLSS) problems and present statistical machine learning methods used to tackle and circumvent these difficulties. Regularization and kernel algorithms were explored in this research using seven datasets where κ < 1. These techniques require special attention to tuning, necessitating the investigation of several extensions of cross-validation to support better predictive performance. While no single algorithm was universally the best predictor, the regularization technique produced lower test errors in five of the seven datasets studied.
Data Mining for Software Development Life Cycle Quality Management
http://hdl.handle.net/10525/2436
Title: Data Mining for Software Development Life Cycle Quality Management<br/><br/>Authors: Nedeltcheva, Galia<br/><br/>Abstract: Computer software plays an important role in business, government, society and the sciences. To solve real-world problems, it is very important to measure quality and reliability in the software development life cycle (SDLC). Software Engineering (SE) is the computing field concerned with designing, developing, implementing, maintaining and modifying software. The present paper gives an overview of the Data Mining (DM) techniques that can be applied to various types of SE data in order to solve the challenges posed by SE tasks such as programming, bug detection, debugging and maintenance. A specific DM software is discussed, namely one of the analytical tools for analyzing data and summarizing the relationships that have been identified. The paper concludes that the proposed DM techniques within the domain of SE could be well applied in fields such as Customer Relationship Management (CRM), eCommerce and eGovernment. ACM Computing Classification System (1998): H.2.8.
Accent Recognition for Noisy Audio Signals
http://hdl.handle.net/10525/2435
Title: Accent Recognition for Noisy Audio Signals<br/><br/>Authors: Ma, Zichen; Fokoue, Ernest<br/><br/>Abstract: It is well established that accent recognition can reach accuracies of up to 95% when the signals are noise-free, using feature extraction techniques such as mel-frequency cepstral coefficients and binary classifiers such as discriminant analysis, support vector machines and k-nearest neighbors. In this paper, we demonstrate that the predictive performance can drop by as much as 15% when the signals are noisy. Specifically, we perturb the signals with different levels of white noise, and as the noise becomes stronger, the out-of-sample predictive performance deteriorates from 95% to 80%, although the in-sample prediction gives overly optimistic results. ACM Computing Classification System (1998): C.3, C.5.1, H.1.2, H.2.4, G.3.
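The white-noise perturbation described in the last abstract can be sketched as follows. This is a minimal illustration only: the function name, the target-SNR parameterization, and the synthetic test tone are assumptions for the sketch, not the authors' actual experimental setup.

```python
import numpy as np

def add_white_noise(signal, snr_db, rng=None):
    """Perturb a 1-D signal with white Gaussian noise at a target SNR in dB."""
    rng = np.random.default_rng(rng)
    signal_power = np.mean(signal ** 2)
    # Noise power required so that 10*log10(signal_power / noise_power) == snr_db
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example: a synthetic 440 Hz tone sampled at 16 kHz, perturbed at decreasing SNRs
t = np.linspace(0, 1, 16000, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)
for snr in (30, 20, 10):
    noisy = add_white_noise(clean, snr, rng=0)
    measured = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
    # measured SNR tracks the requested level; stronger noise (lower SNR)
    # is what degrades out-of-sample accent-recognition accuracy in the paper
```

In an experiment along the lines of the abstract, `noisy` would then be fed through the same feature-extraction pipeline (e.g. MFCCs) and classifier as the clean signals, and test accuracy compared across noise levels.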