888 results for locality algorithms


Relevance: 20.00%

Publisher:

Abstract:

This research evaluates pattern recognition techniques on a subclass of big data where the dimensionality of the input space (p) is much larger than the number of observations (n). Specifically, we evaluate massive gene expression microarray cancer data where the ratio κ is less than one. We explore the statistical and computational challenges inherent in these high-dimensional, low-sample-size (HDLSS) problems and present statistical machine learning methods used to tackle and circumvent these difficulties. Regularization and kernel algorithms were explored in this research using seven datasets where κ < 1. These techniques require careful tuning, so several extensions of cross-validation were investigated to support better predictive performance. While no single algorithm was universally the best predictor, the regularization technique produced lower test errors in five of the seven datasets studied.
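For illustration, a minimal sketch of the kind of regularized, cross-validated workflow described above, assuming Python with scikit-learn; the synthetic data, the specific estimator (L2-penalized logistic regression), and the tuning grid are illustrative assumptions, not the dissertation's exact setup.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic HDLSS data: n observations, p features, with p >> n.
rng = np.random.default_rng(0)
n, p = 100, 5000
X = rng.normal(size=(n, p))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The regularization strength C is the critical tuning parameter in this regime,
# selected here by ordinary k-fold cross-validation (one of several possible schemes).
grid = GridSearchCV(
    LogisticRegression(penalty="l2", solver="liblinear"),
    param_grid={"C": np.logspace(-3, 2, 10)},
    cv=5,
)
grid.fit(X_tr, y_tr)
print("best C:", grid.best_params_["C"], "test error:", 1 - grid.score(X_te, y_te))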

Relevance: 20.00%

Publisher:

Abstract:

A class of priority systems with non-zero switching times, referred to as generalized priority systems, is considered. Analytical results regarding the distribution of busy periods, queue lengths, and various auxiliary characteristics are presented. These results can be viewed as generalizations of the Kendall functional equation and the Pollaczek-Khinchine transform equation, respectively. Numerical algorithms for the systems' busy periods and traffic coefficients are developed. ACM Computing Classification System (1998): 60K25.
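For context, the classical M/G/1 relations that these results generalize can be stated as follows (standard queueing notation, not taken from the abstract): B*(s) is the Laplace-Stieltjes transform of the service-time distribution, λ the arrival rate, and ρ = λE[B] < 1 the traffic coefficient.

% Kendall functional equation for the busy-period transform \beta(s):
\beta(s) = B^{*}\bigl(s + \lambda - \lambda\,\beta(s)\bigr)

% Pollaczek-Khinchine transform equation for the stationary waiting time W^{*}(s):
W^{*}(s) = \frac{(1-\rho)\,s}{s - \lambda\bigl(1 - B^{*}(s)\bigr)}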

Relevance: 20.00%

Publisher:

Abstract:

The "geographical division of labour" or "location problem" refers to the question of why geographical units and regional economic systems specialized in a particular economic activity emerge. Traditional economic approaches tend to emphasize an area's rationally calculable comparative advantages, the proximity of raw materials or markets, infrastructural endowments, path dependence, and so on. The author of this study highlights the importance of social relations, suggesting that territorial specialization emerges under the pressure of other, similarly specialized actors with whom one is connected. The hypothesis is explored on the basis of the experience of two case studies conducted abroad. ______________________ The question of "regional economic systems", "geographical division of labour" or "location problem" has an extensive literature. Economic approaches emphasize the rationally calculated advantages of specialized industrial areas: the benefit of exploiting discovered resources, more cooperative relations, etc. The paper stresses the role of social networks in the location problem: economically specialized areas form through the suggestions and tips of connected enterprises and cooperative partners. The hypothesis is based on the experience of two case studies, conducted in a Peruvian rural area and a Mexican modern industrial area.

Relevance: 20.00%

Publisher:

Abstract:

This work is directed towards optimizing the radiation pattern of smart antennas using genetic algorithms. The structure of the smart antennas, based on Space Division Multiple Access (SDMA), is proposed. It is composed of adaptive antennas, each of which has adjustable weight elements for the amplitudes and phases of signals. The corresponding radiation pattern formula, suitable for numerical optimization techniques, is deduced. Genetic algorithms are applied to search for the best phase-amplitude weights or phase-only weights with which the optimal radiation pattern can be achieved.

One highlight of this work is the proposed optimal radiation pattern concept and its implementation by genetic algorithms. The results show that genetic algorithms are effective for the true Signal-to-Interference Ratio (SIR) design of smart antennas: nulls can be placed in the directions of the interfering signals while main lobes are simultaneously formed in the directions of the desired signals. The optimal radiation pattern of a smart antenna possessing SDMA capability has been achieved.

The second highlight is the weight search by genetic algorithms for the optimal radiation pattern design of antennas subject to more than one interfering signal. The regular criterion for determining which chromosome should be kept for the next iteration is modified so as to improve the performance of the genetic algorithm. The results show that the modified criterion speeds up the iteration and guarantees its convergence.

In addition, a comparison between phase-amplitude perturbations and phase-only perturbations for the radiation pattern design of smart antennas is carried out, and the effects of the parameters used by the genetic algorithm on the optimal radiation pattern design are investigated. Valuable results are obtained.
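As a rough illustration of the phase-only search described above, the following sketch evolves the element phases of a uniform linear array so that the pattern favors one desired direction and suppresses one interferer; the array geometry, population settings, and fitness definition are illustrative assumptions rather than the dissertation's design (Python with NumPy).

import numpy as np

rng = np.random.default_rng(1)
N = 8                                                    # array elements, half-wavelength spacing
theta_des, theta_int = np.deg2rad(10.0), np.deg2rad(-30.0)

def array_factor(phases, theta):
    n = np.arange(N)
    steering = np.exp(1j * np.pi * n * np.sin(theta))    # k*d = pi for d = lambda/2
    return np.abs(np.sum(np.exp(1j * phases) * steering))

def fitness(phases):
    # Reward gain toward the desired signal, penalize gain toward the interferer.
    return array_factor(phases, theta_des) - 5.0 * array_factor(phases, theta_int)

pop = rng.uniform(0, 2 * np.pi, size=(40, N))            # population of phase vectors
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]              # keep the fitter half
    ia, ib = rng.integers(0, 20, 20), rng.integers(0, 20, 20)
    mask = rng.random((20, N)) < 0.5
    children = np.where(mask, parents[ia], parents[ib])  # uniform crossover
    children = np.mod(children + rng.normal(0, 0.2, children.shape), 2 * np.pi)  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("gain toward desired direction:  ", round(array_factor(best, theta_des), 2))
print("gain toward interfering direction:", round(array_factor(best, theta_int), 2))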

Relevance: 20.00%

Publisher:

Abstract:

Optimization of adaptive traffic signal timing is one of the most complex problems in traffic control systems. This dissertation presents a new method that applies the parallel genetic algorithm (PGA) to optimize adaptive traffic signal control in the presence of transit signal priority (TSP). The method can optimize the phase plan, cycle length, and green splits at isolated intersections with consideration for the performance of both transit and general vehicles. Unlike the simple genetic algorithm (GA), the PGA can provide the better and faster solutions needed for real-time optimization of adaptive traffic signal control.

An important component of the proposed method is a microscopic delay estimation model designed specifically for optimizing adaptive traffic signals with TSP. Macroscopic delay models, such as the Highway Capacity Manual (HCM) delay model, cannot accurately account for the effects of phase combination and phase sequence in delay calculations. In addition, because the number of phases and the phase sequence of an adaptive traffic signal may vary from cycle to cycle, the phase splits cannot be optimized when the phase sequence is also a decision variable. A "flex-phase" concept was introduced in the proposed microscopic delay estimation model to overcome these limitations.

The performance of the PGA was first evaluated against the simple GA. The results show that the PGA achieved both faster convergence and lower delay under both under-saturated and over-saturated traffic conditions. A VISSIM simulation testbed was then developed to evaluate the performance of the proposed PGA-based adaptive traffic signal control with TSP. The simulation results show that the PGA-based optimizer for adaptive TSP outperformed the fully actuated NEMA control in all test cases. The results also show that the PGA-based optimizer was able to produce TSP timing plans that benefit transit vehicles while minimizing the impact of TSP on general vehicles. The VISSIM testbed developed in this research provides a powerful tool to design and evaluate different TSP strategies under both actuated and adaptive signal control.
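A minimal sketch of the island-model flavor of a parallel GA, assuming Python with NumPy: each worker process evolves its own sub-population independently (a full island model would also migrate the best members periodically), and the toy quadratic fitness merely stands in for the microscopic delay model; the decision variables, bounds, and GA settings are illustrative.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def evolve_island(args):
    seed, generations = args
    rng = np.random.default_rng(seed)
    pop = rng.uniform(10, 60, size=(30, 4))          # e.g. four green splits in seconds (illustrative)
    for _ in range(generations):
        delay = np.sum((pop - 35.0) ** 2, axis=1)    # placeholder for the simulated delay of each plan
        parents = pop[np.argsort(delay)[:15]]        # keep the lower-delay half
        children = parents[rng.integers(0, 15, 15)] + rng.normal(0, 1.0, (15, 4))
        pop = np.vstack([parents, np.clip(children, 10, 60)])
    delay = np.sum((pop - 35.0) ** 2, axis=1)
    return pop[np.argmin(delay)], delay.min()

if __name__ == "__main__":
    # Islands run concurrently in separate processes, mirroring the parallelism of a PGA.
    with ProcessPoolExecutor(max_workers=4) as ex:
        results = list(ex.map(evolve_island, [(seed, 100) for seed in range(4)]))
    best, d = min(results, key=lambda r: r[1])
    print("best green splits:", np.round(best, 1), "delay surrogate:", round(d, 3))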

Relevance: 20.00%

Publisher:

Abstract:

This research pursued the conceptualization and real-time verification of a system that allows a computer user to control the cursor of a computer interface without using his/her hands. The target user groups for this system are individuals who are unable to use their hands due to spinal dysfunction or other afflictions, and individuals who must use their hands for higher priority tasks while still requiring interaction with a computer.

The system receives two forms of input from the user: Electromyogram (EMG) signals from muscles in the face and point-of-gaze coordinates produced by an Eye Gaze Tracking (EGT) system. In order to produce reliable cursor control from the two forms of user input, the development of this EMG/EGT system addressed three key requirements: an algorithm was created to accurately translate EMG signals due to facial movements into cursor actions, a separate algorithm was created that recognized an eye gaze fixation and provided an estimate of the associated eye gaze position, and an information fusion protocol was devised to efficiently integrate the outputs of these algorithms.

Experiments were conducted to compare the performance of EMG/EGT cursor control to EGT-only control and mouse control. These experiments took the form of two different types of point-and-click trials. The data produced by these experiments were evaluated using statistical analysis, Fitts' Law analysis and target re-entry (TRE) analysis.

The experimental results revealed that though EMG/EGT control was slower than EGT-only and mouse control, it provided effective hands-free control of the cursor without a spatial accuracy limitation, and it also facilitated a reliable click operation. This combination of qualities is not possessed by either EGT-only or mouse control, making EMG/EGT cursor control a unique and practical alternative for a user's cursor control needs.
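For reference, the Fitts' Law part of the analysis mentioned above typically reduces to computing an index of difficulty per trial and fitting movement time against it; a minimal sketch with synthetic trial data (the numbers are placeholders, not the experiment's measurements):

import numpy as np

# (distance to target in px, target width in px, measured movement time in s) per trial -- illustrative
trials = np.array([[400, 40, 0.9], [400, 20, 1.2], [800, 40, 1.3], [800, 20, 1.6]])
D, W, MT = trials.T

ID = np.log2(D / W + 1)            # Shannon formulation of the index of difficulty (bits)
b, a = np.polyfit(ID, MT, 1)       # fit MT = a + b * ID
throughput = np.mean(ID / MT)      # bits per second

print(f"a = {a:.2f} s, b = {b:.2f} s/bit, mean throughput = {throughput:.2f} bit/s")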

Relevance: 20.00%

Publisher:

Abstract:

The profitability of momentum portfolios in the equity markets is derived from the continuation of stock returns over medium time horizons. The empirical evidence of momentum, however, differs significantly across markets around the world. The purpose of this dissertation is to: (1) help global investors determine the optimal selection and holding periods for momentum portfolios, (2) evaluate the profitability of the optimized momentum portfolios in different time periods and market states, (3) assess the investment strategy profits after considering transaction costs, and (4) interpret momentum returns within the framework of prior studies on investor behavior.

Improving on the traditional practice of selecting arbitrary selection and holding periods, a genetic algorithm (GA) is employed. The GA performs a thorough and structured search to capture the return continuation and reversal patterns of momentum portfolios. Three portfolio formation methods are used: price momentum, earnings momentum, and combined earnings-and-price momentum, together with a non-linear optimization procedure (the GA). The focus is on common equity of the U.S. and a select number of countries, including Australia, France, Germany, Japan, the Netherlands, Sweden, Switzerland and the United Kingdom.

The findings suggest that the evolutionary algorithm increases the annualized profits of the U.S. momentum portfolios. However, the difference in mean returns is statistically significant only in certain cases. In addition, after considering transaction costs, both price and earnings-and-price momentum portfolios do not appear to generate abnormal returns. Positive risk-adjusted returns net of trading costs are documented solely during "up" markets for a portfolio long in prior winners only. The results on the international momentum effects indicate that the GA improves the momentum returns by 2 to 5% on an annual basis. In addition, the relation between momentum returns and exchange rate appreciation/depreciation is examined. Currency appreciation does not appear to influence momentum profits significantly. Further, the influence of the market state on momentum returns is not uniform across the countries considered. The implications of the above findings are discussed with a focus on the practical aspects of momentum investing, both in the U.S. and globally.
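To make the search space concrete, the sketch below evaluates the mean return of a winners-minus-losers price momentum portfolio for a given selection (formation) period J and holding period K on synthetic returns; the GA described above would search this (J, K) space rather than enumerating it, and all data and parameters here are illustrative.

import numpy as np

rng = np.random.default_rng(2)
rets = rng.normal(0.01, 0.08, size=(120, 50))        # 120 months x 50 stocks, made-up returns

def momentum_return(J, K, top=5):
    profits = []
    for t in range(J, rets.shape[0] - K):
        past = rets[t - J:t].sum(axis=0)              # formation-period performance
        winners, losers = np.argsort(past)[-top:], np.argsort(past)[:top]
        hold = rets[t:t + K]                          # holding-period returns
        profits.append(hold[:, winners].mean() - hold[:, losers].mean())
    return np.mean(profits)

# Brute-force stand-in for the GA search over selection and holding periods.
best = max(((J, K) for J in (3, 6, 9, 12) for K in (3, 6, 9, 12)),
           key=lambda jk: momentum_return(*jk))
print("best (selection, holding) in months:", best)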

Relevance: 20.00%

Publisher:

Abstract:

This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch processing machines) to detect early failures. Several PCBs can be tested simultaneously as long as the total size of all the PCBs in the batch does not exceed the chamber capacity. PCBs from different production lines arrive dynamically to a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB arriving last to the batch. ESS chambers are expensive and constitute a bottleneck; consequently, the makespan has to be minimized.

A mixed-integer formulation is proposed for the problem under study and compared to a recently published formulation. The proposed formulation is better in terms of the number of decision variables, the number of linear constraints, and run time. A procedure to compute a lower bound is also proposed. For sparse problems (i.e., when job ready times are widely dispersed), the lower bounds are close to the optimum.

The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (simulated annealing (SA) and a greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (column generation) are proposed, particularly to solve problem instances that require prohibitively long run times when a commercial solver is used. An extensive experimental study was conducted to evaluate the different solution approaches based on solution quality and run time.

The decomposition approach improved the lower bounds (i.e., the linear relaxation solution) of the mixed-integer formulation. At least one of the proposed heuristics outperforms the Modified Delay heuristic from the literature. For sparse problems, almost all the heuristics report a solution close to the optimum. GRASP outperforms SA at a higher computational cost. The proposed approaches are viable to implement, as the run time is very short.
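As a simple illustration of the batching decisions involved, the sketch below applies a greedy heuristic in the spirit of those proposed: sort PCBs by ready time, fill a batch up to the chamber capacity, and assign it to the chamber that frees up first; the data are made up and this is not one of the dissertation's specific heuristics.

import heapq

# (ready time, processing time, size) per PCB -- illustrative data
jobs = [(0, 5, 3), (1, 4, 4), (2, 6, 2), (3, 3, 5), (8, 7, 4), (9, 2, 3)]
capacity, n_chambers = 8, 2

jobs.sort()                                    # order jobs by ready time
chambers = [0.0] * n_chambers                  # time at which each chamber becomes free
heapq.heapify(chambers)

i = 0
while i < len(jobs):
    batch, size = [], 0
    # Greedily fill the batch; always take at least one job so the loop advances.
    while i < len(jobs) and (not batch or size + jobs[i][2] <= capacity):
        batch.append(jobs[i]); size += jobs[i][2]; i += 1
    ready = max(j[0] for j in batch)           # batch is ready when its last PCB arrives
    proc = max(j[1] for j in batch)            # batch takes the longest PCB test time
    free = heapq.heappop(chambers)             # chamber that frees up first
    heapq.heappush(chambers, max(free, ready) + proc)

print("makespan:", max(chambers))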

Relevance: 20.00%

Publisher:

Abstract:

Buffered crossbar switches have recently attracted considerable attention as the next generation of high speed interconnects. They are a special type of crossbar switch with an exclusive buffer at each crosspoint of the crossbar. They demonstrate unique advantages over traditional unbuffered crossbar switches, such as high throughput, low latency, and asynchronous packet scheduling. However, since crosspoint buffers are expensive on-chip memories, it is desirable that each crosspoint has only a small buffer. This dissertation proposes a series of practical algorithms and techniques for efficient packet scheduling for buffered crossbar switches. To reduce the hardware cost of such switches and make them scalable, we considered partially buffered crossbars, whose crosspoint buffers can be of an arbitrarily small size.

Firstly, we introduced a hybrid scheme called Packet-mode Asynchronous Scheduling Algorithm (PASA) to schedule best effort traffic. PASA combines the features of both distributed and centralized scheduling algorithms and can directly handle variable length packets without Segmentation And Reassembly (SAR). We showed by theoretical analysis that it achieves 100% throughput for any admissible traffic in a crossbar with a speedup of two. Moreover, outputs in PASA have a high probability of avoiding the more time-consuming centralized scheduling process, and can thus make fast scheduling decisions.

Secondly, we proposed the Fair Asynchronous Segment Scheduling (FASS) algorithm to handle guaranteed performance traffic with explicit flow rates. FASS reduces the crosspoint buffer size by dividing packets into shorter segments before transmission. It also provides tight constant performance guarantees by emulating the ideal Generalized Processor Sharing (GPS) model. Furthermore, FASS requires no speedup for the crossbar, lowering the hardware cost and improving the switch capacity.

Thirdly, we presented a bandwidth allocation scheme called Queue Length Proportional (QLP) to apply FASS to best effort traffic. QLP dynamically obtains a feasible bandwidth allocation matrix based on the queue length information, and thus helps the crossbar switch be more work-conserving. The feasibility and stability of QLP were proved, regardless of whether the traffic distribution is uniform or non-uniform. Hence, based on the bandwidth allocation of QLP, FASS can also achieve 100% throughput for best effort traffic in a crossbar without speedup.
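A minimal sketch of the queue-length-proportional idea behind QLP: allocate each input-output pair a bandwidth share proportional to its queue length and scale the matrix so that no input or output line is oversubscribed; the normalization used here is an illustrative choice, not the dissertation's exact scheme.

import numpy as np

# Queue lengths (in cells) for each input-output pair of a 3x3 switch -- illustrative
Q = np.array([[30, 10,  0],
              [ 5, 20, 25],
              [ 0, 15, 10]], dtype=float)

# Scale by the busiest input or output line so every row and column sum is at most 1.
scale = max(Q.sum(axis=1).max(), Q.sum(axis=0).max())
R = Q / scale                          # fraction of line rate allocated to each crosspoint flow

print(np.round(R, 3))
print("max input load:", R.sum(axis=1).max(), "max output load:", R.sum(axis=0).max())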

Relevance: 20.00%

Publisher:

Abstract:

As massive data sets become increasingly available, people are facing the problem of how to effectively process and understand these data. Traditional sequential computing models are giving way to parallel and distributed computing models, such as MapReduce, due to both the large size of the data sets and their high dimensionality. In the same direction as other research based on MapReduce, this dissertation develops effective techniques and applications using MapReduce to help people solve large-scale problems. Three different problems are tackled in the dissertation. The first deals with processing terabytes of raster data in a spatial data management system: aerial imagery files are broken into tiles to enable data-parallel computation. The second and third problems deal with dimension reduction techniques that can be used to handle data sets of high dimensionality. Three variants of the nonnegative matrix factorization technique are scaled up to factorize matrices with dimensions on the order of millions in MapReduce, based on different matrix multiplication implementations. Two algorithms, which compute CANDECOMP/PARAFAC and Tucker tensor decompositions respectively, are parallelized in MapReduce by carefully partitioning the data and arranging the computation to maximize data locality and parallelism.
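One MapReduce pattern that recurs when scaling matrix factorizations is computing a small Gram matrix A^T A from a tall-skinny matrix: mappers emit the partial product of their row block and a reducer sums them; the plain-Python sketch below mimics that pattern, with block counts and matrix sizes chosen purely for illustration.

import numpy as np
from functools import reduce

A = np.random.default_rng(3).normal(size=(10_000, 20))   # tall-skinny factor matrix (illustrative)
row_blocks = np.array_split(A, 8)                         # data-parallel partitions ("tiles")

def mapper(block):
    return block.T @ block                                # each mapper emits a small k x k partial Gram

partials = map(mapper, row_blocks)                        # would run on separate workers in MapReduce
gram = reduce(np.add, partials)                           # the reducer sums the partial results

print(np.allclose(gram, A.T @ A))                         # True: same result as the sequential product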

Relevance: 20.00%

Publisher:

Abstract:

In recent decades, the rapid development of optical spectroscopy for tissue diagnosis has been indicative of its high clinical value. The goal of this research is to prove the feasibility of using diffuse reflectance spectroscopy and fluorescence spectroscopy to assess myocardial infarction (MI) in vivo. The proposed optical technique was designed to be an intra-operative guidance tool that can provide surgeons and researchers with useful information about the condition of an infarct.

In order to gain insight into the pathophysiological characteristics of an infarct, two novel spectral analysis algorithms were developed to interpret diffuse reflectance spectra. The algorithms were developed based on the unique absorption properties of hemoglobin, for the purpose of retrieving regional hemoglobin oxygenation saturation and concentration in tissue from diffuse reflectance spectra. The algorithms were evaluated and validated using both simulated data and actual experimental data.

Finally, the hypothesis of the study was validated using a rabbit model of MI, in which the MI was induced by ligating a major coronary artery of the left ventricle. Three to four weeks after the MI was induced, the extent of myocardial tissue injury and the evolution of the wound healing process were investigated using the proposed spectroscopic methodology as well as histology. The correlations between spectral alterations and histopathological features of the MI were analyzed statistically.

The results of this PhD study demonstrate the applicability of the proposed optical methodology for assessing myocardial tissue damage induced by MI in vivo. The results of the spectral analysis suggest that connective tissue proliferation induced by MI significantly alters the characteristics of diffuse reflectance and fluorescence spectra, and that the magnitudes of the alterations can be quantitatively related to the severity and extensiveness of the connective tissue proliferation.
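The hemoglobin-based analysis can be pictured as a Beer-Lambert-style unmixing: attenuation is modeled as a weighted sum of oxy- and deoxyhemoglobin extinction spectra and the weights give the oxygen saturation; the sketch below uses synthetic placeholder extinction curves and ignores the scattering and pathlength effects that the dissertation's algorithms account for.

import numpy as np

wavelengths = np.linspace(500, 600, 50)
eps_hbo2 = 1.0 + 0.5 * np.sin(wavelengths / 15.0)     # placeholder extinction spectrum for HbO2
eps_hb   = 1.0 + 0.5 * np.cos(wavelengths / 15.0)     # placeholder extinction spectrum for Hb

E = np.column_stack([eps_hbo2, eps_hb])                # spectra matrix (wavelengths x chromophores)
true_c = np.array([0.7, 0.3])                          # [HbO2, Hb] used to build a synthetic measurement
attenuation = E @ true_c

# Least-squares fit of the chromophore concentrations from the measured attenuation.
c, *_ = np.linalg.lstsq(E, attenuation, rcond=None)
so2 = c[0] / c.sum()                                   # hemoglobin oxygen saturation
print(f"recovered SO2 = {so2:.2f}")                    # ~0.70 for this synthetic case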

Relevance: 20.00%

Publisher:

Abstract:

Tumor functional volume (FV) and its mean activity concentration (mAC) are quantities derived from positron emission tomography (PET). They are used to estimate radiation dose for therapy, to evaluate disease progression, and as a prognostic indicator for predicting outcome. PET images have low resolution and high noise and are affected by the partial volume effect (PVE). Manually segmenting each tumor is very cumbersome and hard to reproduce. To address these problems, I developed an algorithm called the iterative deconvolution thresholding segmentation (IDTS) algorithm; it segments the tumor, measures the FV, corrects for the PVE, and calculates the mAC. The algorithm corrects for the PVE without needing to estimate the camera's point spread function (PSF) and does not require optimization for a specific camera.

My algorithm was tested in physical phantom studies, where hollow spheres (0.5-16 ml) were used to represent tumors with a homogeneous activity distribution. It was also tested on irregularly shaped tumors with a heterogeneous activity profile, acquired using physical and simulated phantoms. The physical phantom studies were performed with different signal-to-background ratios (SBR) and different acquisition times (1-5 min). The algorithm was applied to ten clinical data sets, and the results were compared with manual segmentation and with fixed-percentage thresholding methods, called T50 and T60, in which 50% and 60% of the maximum intensity, respectively, is used as the threshold.

The average errors in the FV and mAC calculations were 30% and -35%, respectively, for the 0.5 ml tumor, and about 5% for the 16 ml tumor. The overall FV error was about 10% for heterogeneous tumors in the physical and simulated phantom data. Compared to manual segmentation, the FV and mAC errors for the clinical images were around -17% and 15%, respectively.

In summary, my algorithm has the potential to be applied to data acquired from different cameras, as it does not depend on knowing the camera's PSF. The algorithm can also improve dose estimation and treatment planning.
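A minimal sketch of the iterative-thresholding portion of such an approach, on a synthetic image: the threshold is re-estimated from the current segmentation until the functional volume stabilizes; the deconvolution-based PVE correction of the actual IDTS algorithm is omitted, and the voxel volume and image are placeholders.

import numpy as np

rng = np.random.default_rng(4)
img = rng.normal(1.0, 0.2, size=(64, 64, 64))          # background activity (synthetic)
img[28:36, 28:36, 28:36] += 4.0                        # "tumor" region with higher uptake
voxel_ml = 0.01                                        # illustrative voxel volume in ml

threshold = 0.5 * img.max()                            # initial guess, similar to a T50 threshold
for _ in range(50):
    mask = img > threshold
    # Re-estimate the threshold as the midpoint between foreground and background means.
    new_threshold = 0.5 * (img[mask].mean() + img[~mask].mean())
    if abs(new_threshold - threshold) < 1e-4:
        break
    threshold = new_threshold

print("functional volume (ml):", round(mask.sum() * voxel_ml, 2))
print("mean activity concentration:", round(img[mask].mean(), 2))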

Relevance: 20.00%

Publisher:

Abstract:

Large read-only or read-write transactions with a large read set and a small write set constitute an important class of transactions used in applications such as data mining, data warehousing, statistical applications, and report generators. Such transactions are best supported with optimistic concurrency, because locking large amounts of data for extended periods of time is not an acceptable solution. The abort rate in regular optimistic concurrency algorithms increases exponentially with the size of the transaction. The algorithm proposed in this dissertation solves this problem by using a new transaction scheduling technique that allows a large transaction to commit safely with a probability that can be several orders of magnitude greater than under regular optimistic concurrency algorithms. A performance simulation study and a formal proof of serializability and external consistency of the proposed algorithm are also presented.

This dissertation also proposes a new query optimization technique (lazy queries). Lazy Queries is an adaptive query execution scheme that optimizes itself as the query runs. Lazy queries can be used to find an intersection of sub-queries very efficiently, without requiring full execution of large sub-queries or any statistical knowledge about the data.

An efficient optimistic concurrency control algorithm used in a massively parallel B-tree with variable-length keys is also introduced. B-trees with variable-length keys can be used effectively in a variety of database types; in particular, we show how such a B-tree was used in our implementation of a semantic object-oriented DBMS. The concurrency control algorithm uses semantically safe optimistic virtual "locks" that achieve very fine granularity in conflict detection. This algorithm ensures serializability and external consistency by using logical clocks and backward validation of transactional queries. A formal proof of correctness of the proposed algorithm is also presented.
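For illustration, backward validation of an optimistic transaction can be sketched as follows: at commit time the transaction checks its read set against the write sets of transactions that committed after it started and aborts on any overlap; the data structures are deliberately simplistic and are not the dissertation's B-tree implementation.

committed = []          # list of (commit_timestamp, write_set) for finished transactions
clock = 0               # logical clock

def commit(start_ts, read_set, write_set):
    """Return the new commit timestamp, or None if the transaction must abort."""
    global clock
    for ts, ws in committed:
        if ts > start_ts and read_set & ws:   # a later committer overwrote something we read
            return None
    clock += 1
    committed.append((clock, frozenset(write_set)))
    return clock

start = clock                                                     # two transactions start together
assert commit(start, read_set={"x"}, write_set={"y"}) is not None  # T1 commits first
assert commit(start, read_set={"y"}, write_set={"z"}) is None      # T2 read "y", now stale -> abort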

Relevance: 20.00%

Publisher:

Abstract:

Personalized recommender systems aim to assist users in retrieving and accessing interesting items by automatically acquiring user preferences from the historical data and matching items with the preferences. In the last decade, recommendation services have gained great attention due to the problem of information overload. However, despite recent advances of personalization techniques, several critical issues in modern recommender systems have not been well studied. These issues include: (1) understanding the accessing patterns of users (i.e., how to effectively model users' accessing behaviors); (2) understanding the relations between users and other objects (i.e., how to comprehensively assess the complex correlations between users and entities in recommender systems); and (3) understanding the interest change of users (i.e., how to adaptively capture users' preference drift over time). To meet the needs of users in modern recommender systems, it is imperative to provide solutions to address the aforementioned issues and apply the solutions to real-world applications.

The major goal of this dissertation is to provide integrated recommendation approaches to tackle the challenges of the current generation of recommender systems. In particular, three user-oriented aspects of recommendation techniques were studied, including understanding accessing patterns, understanding complex relations and understanding temporal dynamics. To this end, we made three research contributions. First, we presented various personalized user profiling algorithms to capture click behaviors of users from both coarse- and fine-grained granularities; second, we proposed graph-based recommendation models to describe the complex correlations in a recommender system; third, we studied temporal recommendation approaches in order to capture the preference changes of users, by considering both long-term and short-term user profiles. In addition, a versatile recommendation framework was proposed, in which the proposed recommendation techniques were seamlessly integrated. Different evaluation criteria were implemented in this framework for evaluating recommendation techniques in real-world recommendation applications.

In summary, the frequent changes of user interests and item repository lead to a series of user-centric challenges that are not well addressed in the current generation of recommender systems. My work proposed reasonable solutions to these challenges and provided insights on how to address these challenges using a simple yet effective recommendation framework.
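One simple way to capture the preference drift discussed above is to weight each observed interaction by an exponential time decay, so that a short half-life yields a short-term profile and a long half-life a long-term one; the sketch below uses made-up click data and an arbitrary decay form, purely as an illustration.

from collections import defaultdict

# (item, days_ago) click history for one user -- illustrative
clicks = [("laptop", 1), ("laptop", 2), ("camera", 30), ("camera", 35), ("novel", 200)]

def profile(half_life_days):
    scores = defaultdict(float)
    for item, age in clicks:
        scores[item] += 0.5 ** (age / half_life_days)   # exponentially decayed interaction weight
    total = sum(scores.values())
    return {item: round(score / total, 3) for item, score in scores.items()}

print("short-term profile (7-day half-life):", profile(7))
print("long-term profile (90-day half-life):", profile(90))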