942 results for Parallel programming model
Abstract:
PURPOSE The decision-making process plays a key role in organizations. Every decision-making process produces a final choice that may or may not prompt action. Decision makers recurrently face a dichotomous question: follow a traditional sequential decision-making process, where the output of one decision is used as the input of the next stage, or follow a joint decision-making approach, where several decisions are taken simultaneously. The implications of the decision-making process affect different players in the organization, and the choice of approach remains difficult even with the current literature and practitioners' knowledge. The pursuit of better ways of making decisions has been a common goal for academics and practitioners. Management scientists use different techniques and approaches to improve different types of decisions, the purpose being to use the available resources (data and techniques) as well as possible to achieve the objectives of the organization. Developing and applying models and concepts may help solve the managerial problems faced every day in different companies. As a result of this research, different decision models are presented to contribute to the body of knowledge of management science. The first models focus on the manufacturing industry and the second group on the health care industry. Although these models are case specific, they serve to exemplify that different approaches to the problems can provide interesting results. Unfortunately, there is no universal recipe that can be applied to all problems. Furthermore, the same model may deliver good results with certain data and bad results with other data. A framework to analyse the data before selecting the model to be used is presented and tested on the models developed to exemplify these ideas.

METHODOLOGY As the first step of the research, a systematic literature review on joint decision-making is presented, together with the opinions and suggestions of different scholars. In the next stage of the thesis, the decision-making process of more than 50 companies from different sectors was analysed, focusing on production planning at the job-shop level. The data was obtained through surveys and face-to-face interviews. The following part of the research was carried out in two application fields that are highly relevant for our society: manufacturing and health care. The first step was to study the interactions and develop a mathematical model for the replenishment of a car assembly line, combining the vehicle routing and inventory problems. The next step was to add the car production scheduling (car sequencing) decision and to use metaheuristics such as ant colony optimization and genetic algorithms to test whether the behaviour holds as the problem size grows. A similar approach is presented for the production of semiconductors and aviation parts, where a hoist has to move from one station to another to perform the work and a job schedule has to be produced; for this problem, however, simulation was used for experimentation. In parallel, the scheduling of operating rooms was studied: surgeries were allocated to surgeons and the scheduling of operating rooms was analysed. The first part of this research was done in a teaching hospital, and in the second part uncertainty was added.

Once the previous problems had been analysed, a general framework to characterize a problem instance was built. In the final chapter a general conclusion is presented.

FINDINGS AND PRACTICAL IMPLICATIONS The first contribution is an update of the decision-making literature review, together with an analysis of the possible savings resulting from a change in the decision process. Then, the results of the survey are presented, which reveal a lack of consistency between what managers believe and the actual degree of integration of their decisions. In the next stage of the thesis, a contribution to the body of knowledge of operations research is made with the joint solution of the replenishment, sequencing and inventory problem on the assembly line, together with parallel work on operating room scheduling where different solution approaches are presented. Beyond the solution methods themselves, the main contribution is the framework proposed to pre-evaluate a problem before choosing the techniques to solve it. There is, however, no straightforward answer as to whether joint or sequential solutions are better. Following the proposed framework and evaluating factors such as the flexibility of the answer, the number of actors, and the tightness of the data gives important hints as to the most suitable direction to take in tackling the problem.

RESEARCH LIMITATIONS AND AVENUES FOR FUTURE RESEARCH In the first part of the work it was very difficult to calculate the possible savings of different projects, since many papers do not report these quantities or base the impact on non-quantifiable benefits. Another issue is the confidentiality of many projects, whose data cannot be presented. For the car assembly line problem, more computational power would allow bigger instances to be solved. For the operating room scheduling problem there was a lack of historical data to perform a parallel analysis in the teaching hospital. To keep testing the decision framework it is necessary to apply it to more case studies in order to generalize the results and make them more evident and less ambiguous. The health care field offers great opportunities: despite the recent awareness of the need to improve the decision-making process, much room for improvement remains. Another big difference from the automotive industry is that recent improvements are not spread among all the actors. Therefore, in the future this research will focus more on the collaboration between academia and the health care sector.
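To make the metaheuristic step in this abstract concrete, here is a minimal evolutionary-search sketch for a car-sequencing-style objective. It is illustrative only and not the thesis model (which also integrates replenishment and inventory): the window size, option counts, and the use of swap mutation without crossover are all simplifying assumptions.

```python
import random

# Toy car sequencing: cars with an option (1) must be spread out so that at
# most MAX_IN_WINDOW option-cars appear in any WINDOW consecutive positions.
random.seed(1)
CARS = [1] * 10 + [0] * 20                 # 10 option-cars, 20 plain cars
WINDOW, MAX_IN_WINDOW = 3, 1

def violations(seq):
    """Count window-capacity violations (lower is better)."""
    return sum(max(0, sum(seq[i:i + WINDOW]) - MAX_IN_WINDOW)
               for i in range(len(seq) - WINDOW + 1))

def mutated(seq):
    """Swap two positions in a copy of the sequence."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

# (mu + lambda)-style loop: keep the 30 best of parents plus offspring.
pop = [random.sample(CARS, len(CARS)) for _ in range(30)]
for _ in range(300):
    pop += [mutated(random.choice(pop)) for _ in range(30)]
    pop = sorted(pop, key=violations)[:30]

print("best violation count:", violations(pop[0]))
```

With these toy numbers a zero-violation sequence exists (one option-car per three positions), so the search should reach it quickly.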
Abstract:
The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms will need to be parallel in order to run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPUs): devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them well suited to scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. We show that by parallelizing the subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, especially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras typically used in 3D TV. Using a backward-mapping approach with a depth-inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation. These techniques are well suited to modelling complex scenes with multimodal backgrounds, but have seen little use due to their large computational and memory requirements. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the positions of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared with other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for approximating arbitrarily complex functions with continuous piecewise linear functions, especially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. Our proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a near-optimal partition of the domain of the function that minimizes the approximation error.
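The piecewise linear approximation idea lends itself to a short numerical check. The sketch below is a CPU stand-in, not the GPU/texture-unit implementation; the test function and sample counts are arbitrary. It approximates a function from uniformly spaced samples and measures the maximum error, which for a smooth function should shrink roughly fourfold each time the sample count doubles (the O(h^2) behaviour of linear interpolation).

```python
import numpy as np

# Continuous piecewise linear approximation from uniform samples; on a GPU,
# the texture filtering units evaluate this kind of interpolation in hardware.
f = lambda x: np.exp(-x) * np.sin(4.0 * np.pi * x)   # arbitrary test function

x = np.linspace(0.0, 1.0, 10001)                     # dense evaluation grid
for n in (8, 16, 32, 64):
    knots = np.linspace(0.0, 1.0, n)                 # uniform sample positions
    approx = np.interp(x, knots, f(knots))           # piecewise linear model
    err = np.max(np.abs(f(x) - approx))
    print(f"{n:3d} samples -> max error {err:.2e}")
```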
Abstract:
The activation of the silent endogenous progesterone receptor (PR) gene by 17β-estradiol (E2) in cells stably transfected with estrogen receptor (ER) was used as a model system to study the mechanism of E2-induced transcription. The time course of the E2-induced PR transcription rate was determined by nuclear run-on assays. No marked effect on specific PR gene transcription rates was detected at 0 and 1 h of E2 treatment. After 3 h of E2 treatment, the PR mRNA synthesis rate increased 2.0 ± 0.2-fold and continued to increase to 3.5 ± 0.4-fold by 24 h as compared with 0 h. The transcription rate increase was followed by PR mRNA accumulation. No PR mRNA was detectable at 0, 1, and 3 h of E2 treatment. PR mRNA accumulation was detected at 6 h of E2 treatment and continued until 18 h, the longest time point examined. Interestingly, this slow and gradual transcription rate increase of the endogenous PR gene did not parallel binding of E2 to ER, which was maximal within 30 min. Furthermore, the E2–ER level was down-regulated to 15% at 3 h as compared with 30 min of E2 treatment and remained low at 24 h of E2 exposure. These paradoxical observations indicate that E2-induced transcription activation is more complicated than a simple association of the occupied ER with the transcription machinery.
Abstract:
1-Methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) damages dopaminergic neurons in the substantia nigra pars compacta (SNpc) as seen in Parkinson's disease. Here, we show that the pro-apoptotic protein Bax is highly expressed in the SNpc and that its ablation attenuates SNpc developmental neuronal apoptosis. In adult mice, there is an up-regulation of Bax in the SNpc after MPTP administration and a decrease in Bcl-2. These changes parallel MPTP-induced dopaminergic neurodegeneration. We also show that mutant mice lacking Bax are significantly more resistant to MPTP than their wild-type littermates. This study demonstrates that Bax plays a critical role in the MPTP neurotoxic process and suggests that targeting Bax may provide protective benefit in the treatment of Parkinson's disease.
Abstract:
The energetics of a fusion pathway is considered, starting from the contact site where two apposed membranes each locally protrude (as “nipples”) toward each other. The equilibrium distance between the tips of the two nipples is determined by a balance of physical forces: repulsion caused by hydration and attraction generated by fusion proteins. The energy to create the initial stalk, caused by bending of cis monolayer leaflets, is much less when the stalk forms between nipples rather than parallel flat membranes. The stalk cannot, however, expand by bending deformations alone, because this would necessitate the creation of a hydrophobic void of prohibitively high energy. But small movements of the lipids out of the plane of their monolayers allow transformation of the stalk into a modified stalk. This intermediate, not previously considered, is a low-energy structure that can reconfigure into a fusion pore via an additional intermediate, the prepore. The lipids of this latter structure are oriented as in a fusion pore, but the bilayer is locally compressed. All membrane rearrangements occur in a discrete local region without creation of an extended hemifusion diaphragm. Importantly, all steps of the proposed pathway are energetically feasible.
Abstract:
The folding mechanism of a 125-bead heteropolymer model for proteins is investigated with Monte Carlo simulations on a cubic lattice. Sequences that do and do not fold in a reasonable time are compared. The overall folding behavior is found to be more complex than that of models for smaller proteins. Folding begins with a rapid collapse followed by a slow search through the semi-compact globule for a sequence-dependent stable core with about 30 out of 176 native contacts which serves as the transition state for folding to a near-native structure. Efficient search for the core is dependent on structural features of the native state. Sequences that fold have large amounts of stable, cooperative structure that is accessible through short-range initiation sites, such as those in anti-parallel sheets connected by turns. Before folding is completed, the system can encounter a second bottleneck, involving the condensation and rearrangement of surface residues. Overly stable local structure of the surface residues slows this stage of the folding process. The relation of the results from the 125-mer model studies to the folding of real proteins is discussed.
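As a reference point for the simulation method mentioned above, the following is a minimal sketch of a Metropolis Monte Carlo step of the kind used in lattice folding simulations. It is not the 125-bead model itself: the energy is a toy function of a single reaction coordinate (the number of native contacts, out of the 176 in the paper's model), and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
Q_NATIVE = 176          # total native contacts in the target structure
EPS = -1.0              # toy energy gained per native contact formed

def energy(q):
    return EPS * q      # lower energy as more native contacts form

def metropolis_move(q, T):
    """Propose forming/breaking one contact; accept via the Metropolis rule."""
    q_new = min(max(q + rng.choice([-1, 1]), 0), Q_NATIVE)
    dE = energy(q_new) - energy(q)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        return q_new    # accept downhill moves, uphill with prob. exp(-dE/T)
    return q

q = 0                   # start fully unfolded
for step in range(10000):
    q = metropolis_move(q, T=2.0)
print("native contacts after run:", q)
```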
Abstract:
K+ channels, which have been linked to regulation of electrogenic solute transport as well as Ca2+ influx, represent a locus in hepatocytes for the concerted actions of hormones that employ Ca2+ and cAMP as intracellular messengers. Despite considerable study, the single-channel basis for the synergistic effects of Ca2+ and cAMP on hepatocellular K+ conductance is not well understood. To address this question, patch-clamp recording techniques were applied to a model liver cell line, HTC hepatoma cells. Increasing the cytosolic Ca2+ concentration ([Ca2+]i) in HTC cells, either by activation of purinergic receptors with ATP or by inhibition of intracellular Ca2+ sequestration with thapsigargin, activated low-conductance (9-pS) K+ channels. Studies with excised membrane patches suggested that these channels were directly activated by Ca2+. Exposure of HTC cells to a permeant cAMP analog, 8-(4-chlorophenylthio)-cAMP, also activated 9-pS K+ channels but did not change [Ca2+]i. In excised membrane patches, cAMP-dependent protein kinase (the downstream effector of cAMP) activated K+ channels with conductance and selectivity identical to those of channels activated by Ca2+. In addition, cAMP-dependent protein kinase activated a distinct K+ channel type (5 pS). These data demonstrate differential regulation of low-conductance K+ channels by Ca2+- and cAMP-mediated signaling pathways. Moreover, since low-conductance Ca2+-activated K+ channels have been identified in a variety of cell types, these findings suggest that differential regulation of K+ channels by hormones with distinct signaling pathways may provide a mechanism for hormonal control of solute transport and Ca2+-dependent cellular functions in the liver as well as other nonexcitable tissues.
Abstract:
We have analyzed the pathway of folding of barnase bound to GroEL to resolve the controversy of whether proteins can fold while bound to chaperonins (GroEL or Cpn60) or fold only after their release into solution. Four phases in the folding were detected by rapid-reaction kinetic measurements of the intrinsic fluorescence of both wild-type barnase and mutants. The phases were assigned from their rate laws, sensitivity to mutations, and correspondence to the regain of catalytic activity. At high ratios of denatured barnase to GroEL, 4 mol of barnase rapidly bind per GroEL 14-mer. At high ratios of GroEL to barnase, 1 mol of barnase binds with a rate constant of 3.5 × 10^7 s^-1·M^-1. This molecule then refolds with a low rate constant that changes on mutation in parallel with the rate constant for folding in solution. This rate constant corresponds to the regain of the overall catalytic activity of barnase and increases 15-fold on the addition of ATP, to a physiologically relevant value of approximately 0.4 s^-1. The multiply bound molecules of barnase present at high ratios of barnase to GroEL fold with a rate constant that is also sensitive to mutation but is 10 times higher. If the 110-residue barnase can fold when bound to GroEL and many moles can bind simultaneously, then smaller parts of large proteins should be able to fold while bound.
Abstract:
We perform numerical simulations, including parallel tempering, of a four-state Potts glass model with binary random quenched couplings, using the JANUS application-oriented computer. We find and characterize a glassy transition, estimating the critical temperature and the values of the critical exponents. Nevertheless, the extrapolation to infinite volume is hampered by strong scaling corrections. We show that there is no ferromagnetic transition in a large temperature range around the glassy critical temperature. We also compare our results with those obtained recently on the “random permutation” Potts glass.
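For readers unfamiliar with parallel tempering, the sketch below shows the replica-exchange step at its core. It is a generic sketch, not the JANUS Potts-glass code: adjacent replicas at inverse temperatures beta_i < beta_j swap configurations with probability min(1, exp((beta_j - beta_i)(E_j - E_i))), so high-energy configurations migrate to high temperature and vice versa.

```python
import numpy as np

rng = np.random.default_rng(0)

def pt_swap_sweep(beta, E, states):
    """Attempt swaps between all adjacent temperature pairs (beta ascending)."""
    for i in range(len(beta) - 1):
        # log of the Metropolis acceptance ratio for exchanging replicas i, i+1
        delta = (beta[i + 1] - beta[i]) * (E[i + 1] - E[i])
        if delta >= 0 or rng.random() < np.exp(delta):
            E[i], E[i + 1] = E[i + 1], E[i]
            states[i], states[i + 1] = states[i + 1], states[i]
    return E, states

beta = np.array([0.5, 1.0, 2.0])       # inverse temperatures (ascending)
E = np.array([3.0, 2.5, 1.0])          # current replica energies (toy values)
states = ["s0", "s1", "s2"]            # opaque replica configurations
print(pt_swap_sweep(beta, E, states))
```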
Abstract:
Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Here, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on the analysis problem via conceptual data-mining models instead of low-level programming tasks related to the underlying platform's technical details. These tasks are now entrusted to the model-transformation scaffolding.
Abstract:
Data mining is one of the most important analysis techniques for automatically extracting knowledge from large amounts of data. Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Bearing in mind this situation, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (deployed via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on understanding the analysis problem via conceptual data-mining models instead of wasting effort on low-level programming tasks related to the underlying platform's technical details. These time-consuming tasks are now entrusted to the model-transformation scaffolding. The feasibility of our approach is shown by means of a hypothetical data-mining scenario in which a time series analysis is required.
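The model-transformation idea can be illustrated with a toy model-to-text transformation (hypothetical names throughout; this is not the notation or tooling used in the paper): a platform-independent description of a mining task is compiled into platform-specific analysis code.

```python
# Hypothetical platform-independent ("conceptual") model of a mining task.
conceptual_model = {
    "task": "clustering",
    "inputs": ["age", "income", "purchases"],
    "parameters": {"groups": 5},
}

def to_sklearn(model):
    """Model-to-text transformation: conceptual model -> platform code."""
    assert model["task"] == "clustering"
    cols = ", ".join(repr(c) for c in model["inputs"])
    k = model["parameters"]["groups"]
    return (
        "from sklearn.cluster import KMeans\n"
        f"X = df[[{cols}]]\n"
        f"labels = KMeans(n_clusters={k}).fit_predict(X)\n"
    )

print(to_sklearn(conceptual_model))   # emits scikit-learn-specific code
```

A second transformation targeting another platform could consume the same conceptual model, which is the point of the approach: the analyst edits the model, not the generated code.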
Abstract:
In this paper we describe a hybrid algorithm for an even number of processors, based on an algorithm for two processors and the Overlapping Partition Method for tridiagonal systems. Moreover, we compare this hybrid method with Wang's partition method on a BSP computer. Finally, we compare the theoretical computation cost of both methods for a Cray T3D computer, using the cost model that the BSP model provides.
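For context, partition methods of this kind apply a sequential tridiagonal solver within each subdomain. The sketch below implements the classic Thomas algorithm, the usual per-block building block (a generic sketch; the paper's two-processor and overlapping-partition logic is not shown).

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system. a: sub-diagonal (a[0] unused),
    b: diagonal, c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick check against a dense solve on a small diagonally dominant system
n = 6
a = np.r_[0.0, -np.ones(n - 1)]
b = 2.0 * np.ones(n)
c = np.r_[-np.ones(n - 1), 0.0]
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d))
```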
Abstract:
The so-called parallel multisplitting nonstationary iterative Model A was introduced by Bru, Elsner, and Neumann [Linear Algebra and Its Applications 103:175–192 (1988)] for solving a nonsingular linear system Ax = b using a weak nonnegative multisplitting of the first type. In this paper new results are introduced for the cases where A is a monotone matrix, using a weak nonnegative multisplitting of the second type, and where A is a symmetric positive definite matrix, using a P-regular multisplitting. Nonstationary alternating iterative methods are also studied. Finally, combining Model A and alternating iterative methods, two new models of parallel multisplitting nonstationary iterations are introduced. When matrix A is monotone and the multisplittings are weak nonnegative of the first or of the second type, both models lead to convergent schemes. Likewise, when matrix A is symmetric positive definite and the multisplittings are P-regular, the schemes are convergent.
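For reference, the iteration underlying these schemes can be written out. Given splittings A = M_l - N_l, l = 1, ..., L, and nonnegative diagonal weighting matrices E_l summing to the identity, the standard multisplitting iteration and its nonstationary variant (where splitting l performs q(l, k) local steps at outer iteration k, as in Model A) take the form:

```latex
% Stationary multisplitting iteration
x^{k+1} \;=\; \sum_{\ell=1}^{L} E_\ell \left( M_\ell^{-1} N_\ell \, x^{k} + M_\ell^{-1} b \right),
\qquad A = M_\ell - N_\ell, \quad \sum_{\ell=1}^{L} E_\ell = I .

% Nonstationary variant: q(\ell,k) inner steps per splitting at outer step k
x^{k+1} \;=\; \sum_{\ell=1}^{L} E_\ell \left[ \bigl( M_\ell^{-1} N_\ell \bigr)^{q(\ell,k)} x^{k}
\;+\; \sum_{j=0}^{q(\ell,k)-1} \bigl( M_\ell^{-1} N_\ell \bigr)^{j} M_\ell^{-1} b \right] .
```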
Abstract:
In this paper we describe Fénix, a data model for exchanging information between Natural Language Processing applications. The proposed format is intended to be flexible enough to cover both current and future data structures employed in the field of Computational Linguistics. The Fénix architecture is divided into four separate layers: conceptual, logical, persistence and physical. This division provides a simple interface that abstracts users from low-level implementation details, such as the programming languages and data storage employed, allowing them to focus on the concepts and processes to be modelled. The Fénix architecture is accompanied by a set of programming libraries to facilitate access to and manipulation of the structures created in this framework. We also show how this architecture has already been successfully applied in different research projects.
Abstract:
In this work, we present a systematic method for the optimal development of bioprocesses that relies on the combined use of simulation packages and optimization tools. One of the main advantages of our method is that it allows for the simultaneous optimization of all the individual components of a bioprocess, including the main upstream and downstream units. The design task is mathematically formulated as a mixed-integer dynamic optimization (MIDO) problem, which is solved by a decomposition method that iterates between primal and master sub-problems. The primal dynamic optimization problem optimizes the operating conditions, bioreactor kinetics and equipment sizes, whereas the master level entails the solution of a tailored mixed-integer linear programming (MILP) model that decides on the values of the integer variables (i.e., the number of parallel equipment units and topological decisions). The dynamic optimization primal sub-problems are solved via a sequential approach that integrates the process simulator SuperPro Designer® with an external NLP solver implemented in Matlab®. The capabilities of the proposed methodology are illustrated through its application to a typical fermentation process and to the production of the amino acid L-lysine.
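The primal-master structure can be summarized in a few lines of Python. This is a hedged sketch on a toy cost model, not the actual SuperPro Designer/Matlab implementation: solve_primal stands in for the dynamic optimization of operating conditions at a fixed integer decision, and the master step is reduced to candidate enumeration for brevity (the paper uses a tailored MILP).

```python
def solve_primal(y):
    """Toy continuous subproblem for y parallel units: total cost is
    capital cost (grows with y) plus operating cost (shrinks with y).
    Stands in for the dynamic optimization run by the sequential approach."""
    return 10.0 * y + 40.0 / y            # illustrative numbers only

def mido_decomposition(candidates=(1, 2, 3, 4, 5), tol=1e-6):
    """Iterate over integer decisions (master) and evaluate each with the
    continuous subproblem (primal), keeping the best design found."""
    upper, best_y = float("inf"), None
    for y in candidates:                   # master step: pick next decision
        obj = solve_primal(y)              # primal step: continuous optimum
        if obj < upper - tol:
            upper, best_y = obj, y
    return best_y, upper

print(mido_decomposition())                # -> (2, 40.0) on this toy model
```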