821 results for "Algorithms complexity"


Relevance: 20.00%

Abstract:

Climate change affects the rate of insect invasions as well as the abundance, distribution and impacts of such invasions on a global scale. Among the principal analytical approaches to predicting and understanding the future impacts of biological invasions are Species Distribution Models (SDMs), typically in the form of correlative Ecological Niche Models (ENMs). An underlying assumption of ENMs is that species-environment relationships remain preserved when extrapolating in space and time, an assumption that is widely criticised. The semi-mechanistic modelling platform CLIMEX employs a top-down approach using species ecophysiological traits and is able to avoid some of these extrapolation issues, making it highly applicable to investigating biological invasions in the context of climate change. The tephritid fruit flies (Diptera: Tephritidae) comprise some of the most successful invasive species and most serious economic pests around the world. Here we project CLIMEX models for 12 tephritid species onto future climate scenarios to examine overall patterns of climate suitability and forecast potential distributional changes for this group. We further compare the aggregate response of the group against species-specific responses. We then consider additional drivers of biological invasions to examine how invasion potential is influenced by climate, fruit production and trade indices. For the group of tephritid species examined here, climate change is predicted to decrease global climate suitability and to shift the cumulative distribution poleward. However, at the species level, the predominant direction of range shift for 11 of the 12 species is eastward. Most notably, management will need to consider regional changes in fruit fly invasion potential where high fruit production, trade indices and the predicted distributions of these flies overlap.
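
As a rough illustration of the kind of range-shift summary reported above, the following Python sketch computes suitability-weighted centroids for two climate-suitability rasters and reports the poleward and eastward components of the shift. The grids and the Gaussian "species" blobs are invented stand-ins, not CLIMEX outputs.

```python
import numpy as np

# Toy stand-ins for suitability rasters (e.g. a CLIMEX Ecoclimatic Index):
# one hypothetical species whose suitable area moves poleward and east.
lats = np.linspace(60, -60, 121)               # grid row centres (degrees)
lons = np.linspace(-180, 180, 241)             # grid column centres (degrees)
LON, LAT = np.meshgrid(lons, lats)

def blob(lat0, lon0):
    return np.exp(-(((LAT - lat0) / 15) ** 2 + ((LON - lon0) / 30) ** 2))

def centroid(grid):
    """Suitability-weighted centroid (lat, lon) of a raster."""
    w = grid / grid.sum()
    return (w * LAT).sum(), (w * LON).sum()

current, future = blob(-25, 20), blob(-28, 45)  # invented scenarios
lat0, lon0 = centroid(current)
lat1, lon1 = centroid(future)
print(f"poleward component: {abs(lat1) - abs(lat0):+.2f} deg")
print(f"eastward component: {lon1 - lon0:+.2f} deg")
```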

Relevance: 20.00%

Abstract:

We report a lattice Boltzmann scheme that accounts for adsorption and desorption in the calculation of mesoscale dynamical properties of tracers in media of arbitrary complexity. Lattice Boltzmann simulations have made it possible to solve numerically the coupled Navier-Stokes equations of fluid dynamics and Nernst-Planck equations of electrokinetics in complex, heterogeneous media. With the moment propagation scheme, it further became possible to extract the effective diffusion and dispersion coefficients of tracers, or solutes, of any charge, e.g., in porous media. Nevertheless, the dynamical properties of tracers depend on the tracer-surface affinity, which is not purely electrostatic and also includes a species-specific contribution. In order to capture this important feature, we introduce specific adsorption and desorption processes into a lattice Boltzmann scheme through a modified moment propagation algorithm, in which tracers may adsorb onto and desorb from surfaces with kinetic reaction rates. The method is validated against exact results for pure diffusion and diffusion-advection in Poiseuille flows in a simple geometry. We finally illustrate the importance of taking such processes into account when computing the time-dependent diffusion coefficient in a more complex porous medium.
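
A minimal sketch of the modified moment-propagation idea, reduced to one dimension: a mobile tracer density diffuses between lattice nodes while exchanging with an adsorbed population at the two walls through kinetic rates. The lattice, rates and bounce-back treatment below are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

n, steps = 64, 2000
p_move, k_a, k_d = 0.5, 0.1, 0.05       # hop probability, reaction rates
P = np.zeros(n); P[n // 2] = 1.0        # mobile tracer, released mid-channel
A = np.zeros(2)                         # adsorbed tracer at walls 0 and n-1

for _ in range(steps):
    for w, node in enumerate((0, n - 1)):   # wall kinetics (first order)
        ads, des = k_a * P[node], k_d * A[w]
        P[node] += des - ads
        A[w] += ads - des
    flux = 0.5 * p_move * P             # symmetric hopping (diffusion step)
    newP = P - 2 * flux
    newP[1:] += flux[:-1]
    newP[:-1] += flux[1:]
    newP[0] += flux[0]                  # bounce-back at the left wall
    newP[-1] += flux[-1]                # bounce-back at the right wall
    P = newP

print("mobile:", round(P.sum(), 6), "adsorbed:", round(A.sum(), 6),
      "total:", round(P.sum() + A.sum(), 6))   # total mass is conserved
```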

Relevance: 20.00%

Abstract:

Network virtualisation is gaining considerable attention as a solution to the ossification of the Internet. However, the success of network virtualisation will depend in part on how efficiently the virtual networks utilise substrate network resources. In this paper, we propose a machine learning-based approach to virtual network resource management. We model the substrate network as a decentralised system and introduce a learning algorithm in each substrate node and substrate link, providing self-organisation capabilities. We propose a multi-agent learning algorithm that carries out substrate network resource management in a coordinated and decentralised way. The task of these agents is to use evaluative feedback to learn an optimal policy so as to dynamically allocate network resources to virtual nodes and links. The agents ensure that while the virtual networks have the resources they need at any given time, only the required resources are reserved for this purpose. Simulations show that our dynamic approach significantly improves the virtual network acceptance ratio and the maximum number of virtual network requests accepted at any time, while ensuring that virtual network quality-of-service requirements such as packet drop rate and virtual link delay are not affected.
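
The flavor of the per-node evaluative-feedback learning can be sketched as a simple epsilon-greedy agent that learns what fraction of the requested capacity actually needs to be reserved. The action set, demand model and reward shaping below are invented for illustration; the paper's coordinated multi-agent algorithm is more elaborate.

```python
import random

ACTIONS = [0.25, 0.5, 0.75, 1.0]    # fraction of requested capacity reserved

class NodeAgent:
    """One learning agent per substrate node (epsilon-greedy value learner)."""
    def __init__(self, eps=0.1, lr=0.1):
        self.q = {a: 0.0 for a in ACTIONS}
        self.eps, self.lr = eps, lr

    def act(self):
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)

    def learn(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])

def reward(reserved, demand):
    # Penalise unmet demand heavily and over-reservation mildly.
    return -10.0 * max(demand - reserved, 0.0) - max(reserved - demand, 0.0)

agent, requested = NodeAgent(), 1.0
for _ in range(5000):
    demand = random.uniform(0.3, 0.6) * requested   # fluctuating actual usage
    a = agent.act()
    agent.learn(a, reward(a * requested, demand))

print("learned reservation fraction:", max(agent.q, key=agent.q.get))
```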

Relevance: 20.00%

Abstract:

Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from vast amounts of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be trained efficiently with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
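
As a sketch of how a regularized least-squares ranker can scale linearly in the number of training points, the snippet below fits a linear model to all pairwise score differences using the identity L = n*I - 1*1^T (so L @ X = n * (X - mean)), which lets the normal equations be assembled in O(n d^2) without enumerating the O(n^2) pairs. Data are synthetic; this shows the general RankRLS-style trick, not the thesis code.

```python
import numpy as np

# Objective: min_w sum_{i,j} ((x_i - x_j)@w - (y_i - y_j))^2 + lam*||w||^2.
rng = np.random.default_rng(0)
n, d, lam = 500, 10, 1.0
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

Xc = X - X.mean(axis=0)                 # L @ X == n * Xc for L = n*I - 1*1^T
yc = y - y.mean()                       # L @ y == n * yc
A = n * (Xc.T @ Xc) + lam * np.eye(d)   # X^T L X + lam*I, in O(n d^2)
b = n * (Xc.T @ yc)                     # X^T L y
w = np.linalg.solve(A, b)

# Evaluate by the fraction of correctly ordered pairs on fresh data.
Xt = rng.standard_normal((200, d))
yt, s = Xt @ w_true, Xt @ w
i, j = np.triu_indices(200, k=1)
print("pairwise accuracy:", np.mean((s[i] - s[j]) * (yt[i] - yt[j]) > 0))
```

The same objective can be kernelized for the non-linear, structured-data case, at the cost of the sparse approximations the abstract mentions.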

Relevance: 20.00%

Abstract:

This thesis deals with distance transforms, which are a fundamental tool in image processing and computer vision. Two new distance transforms for gray level images are presented, and as a new application, they are applied to gray level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been adapted to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image that defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map in which the weights are not constant but are the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms: three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented, and several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
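
A compact sketch of the two-pass idea on a toy image is given below: a forward and a backward raster sweep update each pixel from its already-visited neighbours with the local step |G(p) - G(q)| + 1 (a chessboard distance weighted by gray-level differences, in the spirit of the DTOCS), and the passes are repeated until the map stabilizes. The kernel and test image are minimal assumptions for illustration.

```python
import numpy as np

def dtocs(G, F, max_iter=10):
    """Two-pass gray-weighted distance transform (DTOCS-style sketch).
    G: gray level image; F: region of calculation (0 = source pixels)."""
    D = np.where(F == 0, 0.0, np.inf)
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # visited in forward scan
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # visited in backward scan
    rows, cols = G.shape
    for _ in range(max_iter):
        changed = False
        for sweep, rs in ((fwd, range(rows)), (bwd, range(rows - 1, -1, -1))):
            cs = range(cols) if sweep is fwd else range(cols - 1, -1, -1)
            for r in rs:
                for c in cs:
                    if F[r, c] == 0:
                        continue
                    for dr, dc in sweep:
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            cand = D[rr, cc] + abs(float(G[r, c]) - float(G[rr, cc])) + 1
                            if cand < D[r, c]:
                                D[r, c], changed = cand, True
        if not changed:                           # extra rounds only if needed
            break
    return D

G = np.array([[0, 1, 2], [4, 8, 2], [0, 1, 0]])
F = np.ones((3, 3), int); F[0, 0] = 0             # single source pixel
print(dtocs(G, F))
```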

Relevance: 20.00%

Abstract:

In the literature on housing market areas, different approaches to defining them can be found, for example using travel-to-work areas and, more recently, migration data. Here we propose a simple exercise to shed light on which approach performs better. Using regional data from Catalonia, Spain, we have computed housing market areas with both commuting data and migration data. To decide which procedure shows superior performance, we have looked at the uniformity of prices within areas. The main finding is that commuting algorithms produce more homogeneous areas in terms of housing prices.
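
The homogeneity criterion can be illustrated with a toy computation: given two candidate partitions of municipalities, compare the average within-area dispersion of housing prices, here measured as the coefficient of variation. All figures below are invented.

```python
import pandas as pd

# Invented municipalities, prices, and two candidate area partitions.
df = pd.DataFrame({
    "municipality": list("ABCDEFGH"),
    "price": [1800, 1750, 1100, 1150, 2400, 2300, 900, 950],
    "commuting_area": [1, 1, 2, 2, 3, 3, 4, 4],
    "migration_area": [1, 2, 2, 1, 3, 4, 4, 3],
})

def within_area_cv(df, col):
    """Average within-area coefficient of variation of prices."""
    g = df.groupby(col)["price"]
    return (g.std() / g.mean()).mean()

for col in ("commuting_area", "migration_area"):
    print(col, round(within_area_cv(df, col), 3))
# The partition with the lower average CV delineates more
# price-homogeneous housing market areas.
```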

Relevance: 20.00%

Abstract:

We adapt the Shout and Act algorithm to Digital Objects Preservation, where agents explore file systems looking for digital objects to be preserved (victims). When they find something they "shout" so that agent mates can hear it: the louder the shout, the more urgent or important the finding. Louder shouts can also indicate closeness. We perform several experiments to show that this system scales very well, and that heterogeneous teams of agents outperform homogeneous ones over a wide range of task complexities. The target at-risk documents are MS Office documents (including an RTF file) with Excel content or in Excel format. An interesting conclusion from the experiments is thus that fewer heterogeneous (varying-skill) agents can equal the performance of many homogeneous (combined super-skilled) agents, implying significant performance increases with lower overall cost growth. Our results impact the design of Digital Objects Preservation teams: a properly designed combination of heterogeneous teams is cheaper and more scalable when confronted with uncertain maps of digital objects that need to be preserved. A cost pyramid is proposed for engineers to use for modeling the most effective agent combinations.
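
A toy rendition of the shout-and-respond loop may help fix the idea: explorer agents walk a grid of "files", shout with a loudness scaled by a victim's importance, and idle agents move toward the loudest shout they can hear. Grid size, attenuation and loudness model are invented; this is not the paper's experimental setup.

```python
import random

SIZE, STEPS = 20, 300
victims = {(random.randrange(SIZE), random.randrange(SIZE)): random.uniform(1, 10)
           for _ in range(15)}                       # position -> importance
n_victims = len(victims)
agents = [[random.randrange(SIZE), random.randrange(SIZE)] for _ in range(5)]
shouts = []                                          # (position, loudness)

def heard(agent, shout):
    (sx, sy), loud = shout
    return loud / (1 + abs(agent[0] - sx) + abs(agent[1] - sy))  # attenuation

for _ in range(STEPS):
    for a in agents:
        pos = (a[0], a[1])
        if pos in victims:                           # found one: treat, shout
            shouts.append((pos, victims.pop(pos)))
        if shouts:
            best = max(shouts, key=lambda s: heard(a, s))
            if best[0] == pos:                       # shout answered: silence it
                shouts.remove(best)
            else:                                    # move toward loudest shout
                a[0] += (best[0][0] > a[0]) - (best[0][0] < a[0])
                a[1] += (best[0][1] > a[1]) - (best[0][1] < a[1])
        else:                                        # otherwise explore randomly
            a[0] = (a[0] + random.choice((-1, 0, 1))) % SIZE
            a[1] = (a[1] + random.choice((-1, 0, 1))) % SIZE

print(f"preserved {n_victims - len(victims)} of {n_victims} objects")
```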

Relevance: 20.00%

Abstract:

As a result of the growing interest in studying employee well-being as a complex process that displays high levels of within-individual variability and evolves over time, the present study considers the experience of flow in the workplace from a nonlinear dynamical systems approach. Our goal is to offer new ways to move the study of employee well-being beyond linear approaches. With nonlinear dynamical systems theory as the backdrop, we conducted a longitudinal study using the experience sampling method and qualitative semi-structured interviews for data collection; 6,981 records were collected from a sample of 60 employees. The obtained time series were analyzed using various techniques derived from nonlinear dynamical systems theory (i.e., recurrence analysis and surrogate data) together with multiple correspondence analyses. The results revealed the following: 1) flow in the workplace presents a high degree of within-individual variability, and this variability is characterized as chaotic in most cases (75%); 2) high levels of flow are associated with chaos; and 3) different dimensions of the flow experience (e.g., merging of action and awareness), as well as individual (e.g., age) and job characteristics (e.g., job tenure), are associated with the emergence of different dynamic patterns (chaotic, linear and random).
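
For readers unfamiliar with recurrence analysis, the sketch below embeds a series, thresholds the pairwise distances into a recurrence matrix, and reports the recurrence rate for random, periodic and chaotic toy series. The embedding parameters and threshold are rule-of-thumb assumptions, not the study's settings.

```python
import numpy as np

def recurrence_matrix(x, dim=3, delay=1, eps=None):
    """Delay-embed x and return the thresholded recurrence matrix."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
    if eps is None:
        eps = 0.2 * dists.std()          # common rule-of-thumb threshold
    return dists < eps

rng = np.random.default_rng(1)
t = np.arange(500)
logistic = np.empty(500); logistic[0] = 0.4
for i in range(499):                     # chaotic logistic map, r = 4
    logistic[i + 1] = 4 * logistic[i] * (1 - logistic[i])

for name, series in [("random", rng.random(500)),
                     ("periodic", np.sin(0.3 * t)),
                     ("chaotic", logistic)]:
    R = recurrence_matrix(series)
    print(name, "recurrence rate:", round(R.mean(), 3))
```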

Relevance: 20.00%

Abstract:

A novel unsymmetric dinucleating ligand (LN3N4) combining tridentate and tetradentate binding sites linked through a m-xylyl spacer was synthesized as a ligand scaffold for preparing homo- and heterodimetallic complexes, in which the two metal ions are bound in two different coordination environments. Site-selective binding of different metal ions is demonstrated: LN3N4 is able to discriminate between CuI and a complementary metal (M′ = CuI, ZnII, FeII, CuII, or GaIII), so that pure heterodimetallic complexes with the general formula [CuIM′(LN3N4)]n+ are synthesized. Reaction of the dicopper(I) complex [CuI2(LN3N4)]2+ with O2 leads to the formation of two different copper-dioxygen (Cu2O2) intermolecular species (O and TP) between two copper atoms located in the same site of different complex molecules. Taking advantage of this feature, the reaction of the heterodimetallic complexes [CuM′(LN3N4)]n+ with O2 at low temperature is used as a tool to determine the final position of the CuI center in the system, because only one of the two Cu2O2 species is formed.

Relevance: 20.00%

Abstract:

We present an environment for analyzing signals of all kinds with LDB (Local Discriminant Bases) and MLDB (Modified Local Discriminant Bases). This environment uses functions developed in the framework of a thesis still in progress. Understanding some of these functions requires advanced knowledge of signal processing. They are drawn from the work of Naoki Saito [3], which was taken as the starting point for the algorithm of the unfinished doctoral thesis of Jose Antonio Soria. The developed interface accepts the incorporation of new packages and functions. A menu has been prepared for integrating the Sinus IV packet transform and the Cosine IV packet transform, although others can also be added. The application consists of two interfaces, a wizard and a main interface. We have also created a window for importing and exporting the desired variables to different environments. To build this application, all window elements were programmed by hand instead of using MATLAB's GUIDE (Graphical User Interface Development Environment), so that it remains compatible across different versions of the program. In total we wrote 73 functions for the main interface (10 of which belong to the import/export window) and 23 for the wizard. In this work we explain only 6 of these functions, plus the 3 that create the interfaces, so as not to make it excessively long. The functions explained are the most important ones, either because they are used often, because they are the most complicated according to McCabe complexity, or because they are necessary for processing the signal. Every piece of data entered by the user is passed through functions that detect input errors, such as removing zeros or non-numeric characters, checking that values are integers, and checking that they lie within the applicable minimum and maximum limits.
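
The validation step at the end can be sketched as follows. The original application implements it in MATLAB, so this Python function (with an invented name) only mirrors the described logic: strip non-numeric characters, require an integer, and enforce the field's bounds.

```python
def validate_int_field(text, lo, hi):
    """Sketch of the described input validation for an integer field."""
    digits = "".join(ch for ch in text if ch.isdigit())
    if not digits:
        raise ValueError(f"no number found in {text!r}")
    value = int(digits)                  # leading zeros are dropped by int()
    if not lo <= value <= hi:
        raise ValueError(f"{value} is outside the allowed range [{lo}, {hi}]")
    return value

print(validate_int_field(" 012a ", 1, 64))   # -> 12
```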

Relevance: 20.00%

Abstract:

Identification of the order of an autoregressive moving average (ARMA) model by the usual graphical method is subjective. Hence, there is a need for a technique that identifies the order without graphical investigation of the series autocorrelations. To avoid this subjectivity, this thesis focuses on determining the order of an ARMA model using Reversible Jump Markov Chain Monte Carlo (RJMCMC). RJMCMC selects the model from a set of candidate models based on goodness of fit, the standard deviations of the errors and the frequency with which proposals are accepted. Together with a detailed analysis of the classical Box-Jenkins modeling methodology, the integration of MCMC algorithms is examined through parameter estimation and model fitting of ARMA models. This helps verify how well MCMC algorithms can treat ARMA models, by comparing the results with the graphical method. The MCMC approach is seen to produce better results than the classical time series approach.
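
A minimal sketch of the reversible-jump mechanism, reduced to choosing between AR(1) and AR(2) with a conditional Gaussian likelihood: birth proposals draw the new coefficient from its prior, so the prior cancels in the acceptance ratio and the Jacobian is 1. The pure-AR restriction, known noise variance and N(0,1) priors are simplifying assumptions, not the thesis setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 400, 1.0
y = np.zeros(n)
for t in range(2, n):                       # simulate an AR(2) series
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + sigma * rng.standard_normal()

def loglik(phi):
    """Gaussian log-likelihood of AR(p), conditional on the first p values."""
    p = len(phi)
    pred = sum(phi[i] * y[p - 1 - i: n - 1 - i] for i in range(p))
    return -0.5 * np.sum((y[p:] - pred) ** 2) / sigma ** 2

def logprior(phi):
    return -0.5 * np.sum(np.asarray(phi) ** 2)      # N(0,1) on coefficients

phi, counts = [0.0], {1: 0, 2: 0}
for _ in range(20000):
    if rng.random() < 0.5:                  # within-model random-walk update
        prop = [c + 0.1 * rng.standard_normal() for c in phi]
        if np.log(rng.random()) < (loglik(prop) + logprior(prop)
                                   - loglik(phi) - logprior(phi)):
            phi = prop
    elif len(phi) == 1:                     # birth move: AR(1) -> AR(2)
        prop = phi + [rng.standard_normal()]
        if np.log(rng.random()) < loglik(prop) - loglik(phi):
            phi = prop
    else:                                   # death move: AR(2) -> AR(1)
        prop = phi[:1]
        if np.log(rng.random()) < loglik(prop) - loglik(phi):
            phi = prop
    counts[len(phi)] += 1

print("posterior model probabilities:",
      {p: c / 20000 for p, c in counts.items()})
```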

Relevance: 20.00%

Abstract:

A company’s competence to manage its product portfolio complexity is becoming critically important in the rapidly changing business environment. The continuous evolution of customer needs, the competitive market environment and internal product development lead to increasing complexity in product portfolios. Companies that manage this complexity in product development are more profitable in the long run. The complexity derives from product development and management processes in which the development of new product variants is not managed efficiently. Complexity is managed with modularization, a method that divides the product structure into modules. In modularization, it is essential to take into account the trade-off between perceived customer value and module or component commonality across the products. Another goal is to make product configuration more flexible. The benefits are achieved by optimizing complexity in the module offering and deriving new product variants more flexibly and accurately. The developed modularization process includes steps for preparation, mapping the current situation, creating a modular strategy and implementing the strategy. The organization and support systems also have to be adapted to follow up on targets and to execute modularization in practice.

Relevance: 20.00%

Abstract:

This thesis studies the use of heuristic algorithms in a number of combinatorial problems that occur in various resource-constrained environments. Such problems occur, for example, in manufacturing, where a restricted number of resources (tools, machines, feeder slots) are needed to perform some operations. Many of these problems turn out to be computationally intractable, and heuristic algorithms are used to provide efficient, yet sub-optimal, solutions. The main goal of the present study is to build upon existing methods to create new heuristics that provide improved solutions for some of these problems. All of these problems occur in practice, and one of the motivations of our study was the request for improvements from industrial sources. We approach three different resource-constrained problems. The first is the tool switching and loading problem, which occurs especially in the assembly of printed circuit boards. This problem has to be solved when an efficient yet small primary storage is used to access resources (tools) from a less efficient (but unlimited) secondary storage area. We study various forms of the problem and provide improved heuristics for its solution; a classic building block for such heuristics is sketched below. Second, the nozzle assignment problem is concerned with selecting a suitable set of vacuum nozzles for the arms of a robotic assembly machine. It turns out that this is a specialized MINMAX resource allocation formulation of the apportionment problem, and it can be solved efficiently and optimally. We construct an exact algorithm specialized for nozzle selection and provide a proof of its optimality. Third, the problem of feeder assignment and component tape construction occurs when electronic components are inserted and certain component types cause tape movement delays that can significantly impact the efficiency of printed circuit board assembly. Here, careful selection of component slots in the feeder improves the tape movement speed. We provide a formal proof that this problem is of the same complexity as the turnpike problem (a well-studied geometric optimization problem), and provide a heuristic algorithm for it.
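
To make the first problem concrete, the sketch below implements the classic KTNS ("keep tool needed soonest") eviction policy, a standard building block in tool-switching heuristics (not necessarily the thesis's improved heuristic): with the job sequence fixed, always evict the magazine tool whose next use lies furthest in the future. The job/tool data are invented.

```python
def ktns_switches(jobs, capacity):
    """Count tool switches (evictions) for a fixed job sequence under KTNS."""
    magazine, switches = set(), 0
    for i, tools in enumerate(jobs):
        for tool in tools:
            if tool in magazine:
                continue
            if len(magazine) >= capacity:
                def next_use(t):             # job index of t's next use
                    for j in range(i, len(jobs)):
                        if t in jobs[j]:
                            return j
                    return len(jobs)         # never needed again
                magazine.remove(max(magazine, key=next_use))
                switches += 1
            magazine.add(tool)
    return switches

jobs = [{1, 2, 3}, {2, 4}, {1, 4, 5}, {3, 5}, {1, 2}]
print("tool switches with KTNS:", ktns_switches(jobs, capacity=3))
```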

Relevance: 20.00%

Abstract:

As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on the performance and power consumption of on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router that allows a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs via efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and provide compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated into the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have been emerging as a viable candidate for achieving better performance and package density compared to traditional 2D ICs. In addition, combining the benefits of 3D ICs and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (vertical channels) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
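
The congestion-aware output selection can be illustrated with a small sketch: at each hop, a packet in a 2D mesh considers only the productive (minimal-route) directions toward its destination and takes the one whose downstream router currently reports the lowest congestion. The mesh size and congestion metric are invented placeholders for, e.g., neighbours' input-buffer occupancy.

```python
import random

def route(src, dst, congestion):
    """Hop-by-hop minimally adaptive path from src to dst in a 2D mesh."""
    path, (x, y) = [src], src
    while (x, y) != dst:
        options = []                       # productive directions only
        if x != dst[0]:
            options.append((x + (1 if dst[0] > x else -1), y))
        if y != dst[1]:
            options.append((x, y + (1 if dst[1] > y else -1)))
        x, y = min(options, key=lambda n: congestion[n])   # least congested
        path.append((x, y))
    return path

W, H = 4, 4
congestion = {(i, j): random.random() for i in range(W) for j in range(H)}
print(route((0, 0), (3, 3), congestion))
```

Because only productive directions are ever considered, the route stays minimal; adaptivity is confined to choosing among them by congestion.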