900 results for Field-based model
Abstract:
We present a geospatial model to predict the radiofrequency electromagnetic field from fixed site transmitters for use in epidemiological exposure assessment. The proposed model extends an existing model toward the prediction of indoor exposure, that is, at the homes of potential study participants. The model is based on accurate operation parameters of all stationary transmitters of mobile communication base stations, and radio broadcast and television transmitters for an extended urban and suburban region in the Basel area (Switzerland). The model was evaluated by calculating Spearman rank correlations and weighted Cohen's kappa (kappa) statistics between the model predictions and measurements obtained at street level, in the homes of volunteers, and in front of the windows of these homes. The correlation coefficients of the numerical predictions with street level measurements were 0.64, with indoor measurements 0.66, and with window measurements 0.67. The kappa coefficients were 0.48 (95%-confidence interval: 0.35-0.61) for street level measurements, 0.44 (95%-CI: 0.32-0.57) for indoor measurements, and 0.53 (95%-CI: 0.42-0.65) for window measurements. Although the modeling of shielding effects by walls and roofs requires considerable simplifications of a complex environment, we found a comparable accuracy of the model for indoor and outdoor points.
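As a rough illustration of the agreement statistics reported above, the following sketch computes a Spearman rank correlation and a weighted Cohen's kappa between model predictions and measurements. The synthetic data, the tertile binning of exposure values and the quadratic kappa weights are illustrative assumptions, not necessarily the study's choices.

```python
# Minimal sketch: rank correlation and weighted kappa between model
# predictions and measurements (synthetic data, illustrative binning).
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
measured = rng.lognormal(mean=-1.0, sigma=0.8, size=200)   # field strength, V/m (synthetic)
predicted = measured * rng.lognormal(0.0, 0.4, size=200)   # imperfect model output (synthetic)

rho, p = spearmanr(predicted, measured)

# Weighted kappa needs categories: bin both series into tertiles of the measurements.
edges = np.quantile(measured, [1 / 3, 2 / 3])
meas_cat = np.digitize(measured, edges)
pred_cat = np.digitize(predicted, edges)
kappa = cohen_kappa_score(meas_cat, pred_cat, weights="quadratic")

print(f"Spearman rho = {rho:.2f} (p = {p:.3g}), weighted kappa = {kappa:.2f}")
```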
Abstract:
This paper presents a kernel density correlation-based nonrigid point set matching method and shows its application in statistical model-based 2D/3D reconstruction of a scaled, patient-specific model from an uncalibrated x-ray radiograph. In this method, both the reference point set and the floating point set are first represented using kernel density estimates. A correlation measure between these two kernel density estimates is then optimized to find a displacement field such that the floating point set is moved to the reference point set. Regularizations based on the overall deformation energy and the motion smoothness energy are used to constrain the displacement field for a robust point set matching. Incorporating this nonrigid point set matching method into a statistical model-based 2D/3D reconstruction framework, we can reconstruct a scaled, patient-specific model from noisy edge points that are extracted directly from the x-ray radiograph by an edge detector. Our experiment conducted on datasets of two patients and six cadavers demonstrates a mean reconstruction error of 1.9 mm.
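The core correlation measure can be illustrated with a small sketch: both point sets are represented by Gaussian kernels and their kernel correlation is maximized over a transformation. For brevity the sketch below only estimates a global 2-D translation; the paper optimizes a regularized nonrigid displacement field, and the point sets, kernel width and optimizer settings here are illustrative.

```python
# Sketch of a Gaussian kernel-correlation objective between two point sets.
# Only a global 2-D translation is estimated here; the paper's method instead
# estimates a regularized non-rigid displacement field.
import numpy as np
from scipy.optimize import minimize

def kernel_correlation(ref, flo, sigma=2.0):
    """Sum of Gaussian affinities over all point pairs (higher = better aligned)."""
    d2 = ((ref[:, None, :] - flo[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum()

rng = np.random.default_rng(1)
reference = rng.uniform(0, 50, size=(80, 2))          # e.g. sampled silhouette points
floating = reference + np.array([4.0, -3.0])          # same shape, shifted (synthetic)

# Maximize the correlation <=> minimize its negative over the translation t.
res = minimize(lambda t: -kernel_correlation(reference, floating + t), x0=np.zeros(2))
print("recovered translation:", res.x)                # should be close to (-4, 3)
```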
Abstract:
Detailed knowledge of the characteristics of the radiation field shaped by a multileaf collimator (MLC) is essential in intensity modulated radiotherapy (IMRT). A previously developed multiple source model (MSM) for a 6 MV beam was extended to a 15 MV beam and supplemented with an accurate model of an 80-leaf dynamic MLC. Using the supplemented MSM and the MC code GEANT, lateral dose distributions were calculated in a water phantom and a portal water phantom. A field which is normally used for the validation of the step and shoot technique and a field from a realistic IMRT treatment plan delivered with dynamic MLC are investigated. To assess possible spectral changes caused by the modulation of beam intensity by an MLC, the energy spectra in five portal planes were calculated for moving slits of different widths. The extension of the MSM to 15 MV was validated by analysing energy fluences, depth doses and dose profiles. In addition, the MC-calculated primary energy spectrum was verified with an energy spectrum which was reconstructed from transmission measurements. MC-calculated dose profiles using the MSM for the step and shoot case and for the dynamic MLC case are in very good agreement with the measured data from film dosimetry. The investigation of a 13 cm wide field shows an increase in mean photon energy of up to 16% for the 0.25 cm slit compared to the open beam for 6 MV and of up to 6% for 15 MV, respectively. In conclusion, the MSM supplemented with the dynamic MLC has proven to be a powerful tool for investigational and benchmarking purposes or even for dose calculations in IMRT.
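The reported spectral hardening can be checked with a simple fluence-weighted mean-energy calculation; the two spectra in the sketch below are synthetic placeholders, not the GEANT-calculated portal spectra.

```python
# Sketch: fluence-weighted mean photon energy and its relative shift between
# an open beam and a narrow-slit beam. The spectra are synthetic toys.
import numpy as np

E = np.linspace(0.1, 6.0, 300)                        # photon energy grid, MeV
open_beam = E * np.exp(-E / 1.2)                      # soft, open-field spectrum (toy)
slit_beam = E * np.exp(-E / 1.5)                      # hardened spectrum behind a slit (toy)

def mean_energy(spectrum, energy):
    """Fluence-weighted mean energy on a uniform grid: sum(E*phi) / sum(phi)."""
    return float((energy * spectrum).sum() / spectrum.sum())

e_open, e_slit = mean_energy(open_beam, E), mean_energy(slit_beam, E)
print(f"mean energy open = {e_open:.2f} MeV, slit = {e_slit:.2f} MeV, "
      f"shift = {100 * (e_slit / e_open - 1):.1f}%")
```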
Abstract:
The goal of this work is to develop a magnetic-based passive and wireless pressure sensor for use in biomedical applications. Structurally, the pressure sensor, referred to as the magneto-harmonic pressure sensor, is composed of two magnetic elements: a magnetically soft material that acts as the sensing element, and a magnetically hard material that acts as the biasing element. Both elements are embedded within a rigid sensor body and sealed with an elastomer pressure membrane. Upon excitation by an externally applied AC magnetic field, the sensing element produces a higher-order magnetic signature that can be detected remotely with an external receiving coil. When exposed to an environment with changing ambient pressure, the elastomer pressure membrane deflects according to the surrounding pressure. The deflection of the elastomer membrane changes the separation distance between the sensing and biasing elements. As a result, the higher-order harmonic signal emitted by the magnetically soft sensing element is shifted, allowing detection of the pressure change by determining the extent of the harmonic shift. The passive and wireless nature of the sensor is enabled by an external excitation and receiving system consisting of an excitation coil and a receiving coil. These characteristics make the sensor suitable for continuous and long-term pressure monitoring, which is particularly useful for biomedical applications that often require frequent surveillance. In this work, abdominal aortic aneurysm was selected as the disease model for evaluating the performance of the pressure sensor and system. An animal study, with subcutaneous sensor implantation in mice, was conducted to demonstrate the efficacy and feasibility of the pressure sensor in a biological environment.
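The sensing principle can be sketched numerically: a saturating soft-magnetic element driven by an AC field generates harmonics whose amplitudes change with the DC bias field set by the element separation. The tanh magnetization curve, drive frequency and bias values below are purely illustrative, not the sensor's measured characteristic.

```python
# Sketch: how a DC bias field (set by the membrane deflection / element
# separation) shifts the higher-order harmonic content of a saturating
# soft-magnetic element driven by an AC field. All values are illustrative.
import numpy as np

fs, f0 = 100_000, 1_000                      # sample rate and drive frequency, Hz
t = np.arange(0, 0.05, 1 / fs)               # 50 ms of signal
drive = np.sin(2 * np.pi * f0 * t)           # AC excitation field (normalized)

def harmonic_amplitude(signal, order):
    """Amplitude of the given harmonic of f0, read off an FFT of the signal."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return 2 * spec[np.argmin(np.abs(freqs - order * f0))]

for bias in (0.0, 0.3, 0.6):                 # bias grows as the elements approach each other
    magnetization = np.tanh(2.0 * (drive + bias))   # saturating response of the soft element
    print(f"bias {bias:.1f}: 2nd harmonic = {harmonic_amplitude(magnetization, 2):.3f}, "
          f"3rd harmonic = {harmonic_amplitude(magnetization, 3):.3f}")
```

With zero bias the response is symmetric and only odd harmonics appear; a growing bias breaks the symmetry and raises the even harmonics, which is the shift the receiving coil would pick up.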
Abstract:
With a steady increase of regulatory requirements for business processes, automation support of compliance management is a field garnering increasing attention in Information Systems research. Several approaches have been developed to support compliance checking of process models. One major challenge for such approaches is their ability to handle different modeling techniques and compliance rules in order to enable widespread adoption and application. Applying a structured literature search strategy, we reflect on and discuss compliance-checking approaches in order to provide an insight into their generalizability and evaluation. The results imply that current approaches mainly focus on specific modeling techniques and/or a restricted set of types of compliance rules. Most approaches abstain from real-world evaluation, which raises the question of their practical applicability. Referring to the search results, we propose a roadmap for further research in model-based business process compliance checking.
Abstract:
Breaking synoptic-scale Rossby waves (RWB) at the tropopause level are central to the daily weather evolution in the extratropics and the subtropics. RWB leads to pronounced meridional transport of heat, moisture, momentum, and chemical constituents. RWB events are manifest as elongated and narrow structures in the tropopause-level potential vorticity (PV) field. A feature-based validation approach is used to assess the representation of Northern Hemisphere RWB in present-day climate simulations carried out with the ECHAM5-HAM climate model at three different resolutions (T42L19, T63L31, and T106L31) against the ERA-40 reanalysis data set. An objective identification algorithm extracts RWB events from the isentropic PV field and allows quantifying the frequency of occurrence of RWB. The biases in the frequency of RWB are then compared to biases in the time-mean tropopause-level jet wind speeds. The ECHAM5-HAM model captures the location of the RWB frequency maxima in the Northern Hemisphere at all three resolutions. However, at coarse resolution (T42L19) the overall frequency of RWB, i.e. the frequency averaged over all seasons and the entire hemisphere, is underestimated by 28%. The higher-resolution simulations capture the overall frequency of RWB much better, with a minor difference between T63L31 and T106L31 (frequency errors of −3.5% and 6%, respectively). The number of large-size RWB events is significantly underestimated by the T42L19 experiment and well represented in the T106L31 simulation. On the local scale, however, significant differences from ERA-40 are found in the higher-resolution simulations. These differences are regionally confined and vary with the season. The most striking difference between T106L31 and ERA-40 is that ECHAM5-HAM overestimates the frequency of RWB in the subtropical Atlantic in all seasons except for spring. This bias maximum is accompanied by an equatorward extension of the subtropical westerlies.
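A toy version of the objective identification step might look as follows: threshold a 2-D PV field, label the connected regions and keep only those that are elongated and narrow. The synthetic field, the 2-PVU threshold and the aspect-ratio cut-off are illustrative stand-ins for the paper's algorithm.

```python
# Toy stand-in for an objective RWB identification step: find connected
# regions of high PV on a 2-D grid and keep only elongated, narrow ones.
import numpy as np
from scipy import ndimage

y, x = np.mgrid[0:120, 0:200]
pv = 1.0 + 1.8 * np.exp(-((x - 100) ** 2 / 1800 + (y - 60) ** 2 / 40))  # elongated streamer
pv += 1.8 * np.exp(-((x - 30) ** 2 / 60 + (y - 30) ** 2 / 60))          # round blob

labels, n = ndimage.label(pv > 2.0)                     # "2 PVU" contour (illustrative)
for region in range(1, n + 1):
    ys, xs = np.nonzero(labels == region)
    cov = np.cov(np.vstack([xs, ys]))
    lam = np.sort(np.linalg.eigvalsh(cov))              # principal axes of the region
    aspect = np.sqrt(lam[1] / lam[0])
    print(f"region {region}: aspect ratio {aspect:.1f} ->",
          "elongated (RWB candidate)" if aspect > 3.0 else "rejected")
```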
Abstract:
Ocean observing systems and satellites routinely collect a wealth of information on physical conditions in the ocean. With few exceptions, such as chlorophyll concentrations, information on biological properties is harder to measure autonomously. Here, we present a system to produce estimates of the distribution and abundance of the copepod Calanus finmarchicus in the Gulf of Maine. Our system uses satellite-based measurements of sea surface temperature and chlorophyll concentration to determine the developmental and reproductive rates of C. finmarchicus. The rate information then drives a population dynamics model of C. finmarchicus that is embedded in a 2-dimensional circulation field. The first generation of this system produces realistic information on interannual variability in C. finmarchicus distribution and abundance during the winter and spring. The model can also be used to identify key drivers of interannual variability in C. finmarchicus. Experiments with the model suggest that changes in initial conditions are overwhelmed by variability in growth rates after approximately 50 d. Temperature has the largest effect on growth rate. Elevated chlorophyll during the late winter can lead to increased C. finmarchicus abundance during the spring, but the effect of variations in chlorophyll concentrations is secondary to the other inputs. Our system could be used to provide real-time estimates or even forecasts of C. finmarchicus distribution. These estimates could then be used to support management of copepod predators such as herring and right whales.
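A zero-dimensional caricature of the rate-driven idea is sketched below: satellite SST sets a Belehradek-type development time, satellite chlorophyll scales egg production, and both drive a daily abundance update. All coefficients are illustrative, and the real system embeds a stage-structured population model in a 2-D circulation field.

```python
# Highly simplified, zero-dimensional caricature: SST drives development rate,
# chlorophyll drives egg production, and both update daily abundance.
# All coefficients are illustrative placeholders.
import numpy as np

def development_time_days(sst_c, a=600.0, alpha=9.11, beta=-2.05):
    """Belehradek-type development time D = a * (T + alpha)**beta (illustrative constants)."""
    return a * (sst_c + alpha) ** beta

def egg_production(chl_mg_m3, max_rate=50.0, half_sat=1.0):
    """Eggs per female per day, saturating with chlorophyll (illustrative)."""
    return max_rate * chl_mg_m3 / (half_sat + chl_mg_m3)

days = np.arange(120)                                   # winter-spring run
sst = 4.0 + 4.0 * days / 120                            # warming from 4 to 8 degC (synthetic)
chl = 0.5 + 2.0 * np.exp(-((days - 80) / 15.0) ** 2)    # late-winter bloom (synthetic)

adults = np.empty(len(days))
adults[0] = 100.0                                       # initial abundance (arbitrary units)
for d in range(1, len(days)):
    # 0.01 is an arbitrary egg-to-adult survival factor, 2% is a daily mortality (both illustrative)
    recruitment = egg_production(chl[d]) * adults[d - 1] * 0.01 / development_time_days(sst[d])
    adults[d] = adults[d - 1] * (1 - 0.02) + recruitment

print(f"abundance after {len(days)} days: {adults[-1]:.0f} (started at {adults[0]:.0f})")
```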
Abstract:
Dating of sediment cores from the Baltic Sea has proven to be difficult due to uncertainties surrounding the 14C reservoir age and a scarcity of macrofossils suitable for dating. Here we present the results of multiple dating methods carried out on cores in the Gotland Deep area of the Baltic Sea. Particular emphasis is placed on the Littorina stage (8 ka ago to the present) of the Baltic Sea and possible changes in the 14C reservoir age of our dated samples. Three geochronological methods are used. Firstly, palaeomagnetic secular variations (PSV) are reconstructed, whereby ages are transferred to PSV features through comparison with varved lake sediment based PSV records. Secondly, lead (Pb) content and stable isotope analysis are used to identify past peaks in anthropogenic atmospheric Pb pollution. Lastly, 14C determinations were carried out on benthic foraminifera (Elphidium spec.) samples from the brackish Littorina stage of the Baltic Sea. Determinations carried out on smaller samples (as low as 4 µg C) employed an experimental, state-of-the-art method involving the direct measurement of CO2 from samples by a gas ion source without the need for a graphitisation step - the first time this method has been performed on foraminifera in an applied study. The PSV chronology, based on the uppermost Littorina stage sediments, produced ten age constraints between 6.29 and 1.29 cal ka BP, and the Pb depositional analysis produced two age constraints associated with the Medieval pollution peak. Analysis of PSV data shows that adequate directional data can be derived from both the present Littorina saline phase muds and Baltic Ice Lake stage varved glacial sediments. Ferrimagnetic iron sulphides, most likely authigenic greigite (Fe3S4), present in the intermediate Ancylus Lake freshwater stage sediments acquire a gyroremanent magnetisation during static alternating field (AF) demagnetisation, preventing the identification of a primary natural remanent magnetisation for these sediments. An inferred marine reservoir age offset (deltaR) is calculated by comparing the foraminifera 14C determinations to a PSV & Pb age model. This deltaR is found to trend towards younger values upwards in the core, possibly due to a gradual change in hydrographic conditions brought about by a reduction in marine water exchange from the open sea due to continued isostatic rebound.
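The reservoir-offset calculation can be sketched as follows: deltaR is the difference between a sample's measured conventional 14C age and the marine calibration curve's 14C age at the calendar age supplied by the independent PSV/Pb age model. The curve points and the example ages below are placeholders; a real analysis would interpolate the full Marine20 dataset and propagate its uncertainties.

```python
# Sketch of the reservoir-offset calculation: deltaR = measured 14C age minus
# the marine calibration curve's 14C age at the independently known calendar age.
# The curve points and example ages are placeholders.
import numpy as np

# (calendar age BP, marine-curve 14C age BP) -- placeholder values
marine_curve = np.array([
    [1000.0, 1350.0],
    [2000.0, 2300.0],
    [3000.0, 3200.0],
    [4000.0, 4050.0],
])

def delta_r(measured_14c_age, calendar_age_from_age_model):
    curve_14c = np.interp(calendar_age_from_age_model,
                          marine_curve[:, 0], marine_curve[:, 1])
    return measured_14c_age - curve_14c

# Example: foraminifera dated to 2650 14C yr BP, PSV/Pb age model gives 2400 cal yr BP.
print(f"deltaR = {delta_r(2650.0, 2400.0):.0f} 14C yr")
```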
Abstract:
Purpose: A fully three-dimensional (3D) massively parallelizable list-mode ordered-subsets expectation-maximization (LM-OSEM) reconstruction algorithm has been developed for high-resolution PET cameras. System response probabilities are calculated online from a set of parameters derived from Monte Carlo simulations. The shape of a system response for a given line of response (LOR) has been shown to be asymmetrical around the LOR. This work has been focused on the development of efficient region-search techniques to sample the system response probabilities, which are suitable for asymmetric kernel models, including elliptical Gaussian models that allow for high accuracy and high parallelization efficiency. The novel region-search scheme using variable kernel models is applied in the proposed PET reconstruction algorithm. Methods: A novel region-search technique has been used to sample the probability density function in correspondence with a small dynamic subset of the field of view that constitutes the region of response (ROR). The ROR is identified around the LOR by searching for any voxel within a dynamically calculated contour. The contour condition is currently defined as a fixed threshold over the posterior probability, and arbitrary kernel models can be applied using a numerical approach. The processing of the LORs is distributed in batches among the available computing devices; individual LORs are then processed within different processing units. In this way, both multicore and multiple many-core processing units can be efficiently exploited. Tests have been conducted with probability models that take into account the noncolinearity, positron range, and crystal penetration effects, which produce tubes of response with varying elliptical sections whose axes are a function of the crystal's thickness and the angle of incidence of the given LOR. The algorithm treats the probability model as a 3D scalar field defined within a reference system aligned with the ideal LOR. Results: This new technique provides superior image quality in terms of signal-to-noise ratio as compared with the histogram-mode method based on precomputed system matrices available for a commercial small animal scanner. Reconstruction times can be kept low with the use of multicore and many-core architectures, including multiple graphic processing units. Conclusions: A highly parallelizable LM reconstruction method has been proposed based on Monte Carlo simulations and new parallelization techniques aimed at improving the reconstruction speed and the image signal-to-noise ratio of a given OSEM algorithm. The method has been validated using simulated and real phantoms. A special advantage of the new method is the possibility of dynamically defining the cut-off threshold over the calculated probabilities, thus allowing direct control of the trade-off between speed and quality during the reconstruction.
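A two-dimensional toy of the region-of-response idea is sketched below: each list-mode event contributes a Gaussian tube of response around its LOR, only voxels above a probability cut-off are visited, and a few MLEM iterations are run. The geometry, kernel width, cut-off and the uniform-sensitivity assumption are all simplifications of the paper's model.

```python
# Toy 2-D list-mode MLEM illustrating the "region of response" idea: each
# event's system response is a Gaussian tube around its LOR, and only voxels
# above a cut-off probability contribute. Everything here is illustrative.
import numpy as np

N, n_events = 64, 2000
yy, xx = np.mgrid[0:N, 0:N].astype(float)
truth = ((xx - 40) ** 2 + (yy - 28) ** 2 < 7 ** 2).astype(float)     # disc phantom

rng = np.random.default_rng(2)
src_y, src_x = np.nonzero(truth)
idx = rng.integers(0, len(src_x), size=n_events)                     # emission points
theta = rng.uniform(0, np.pi, size=n_events)                         # LOR orientations

sigma, cutoff = 1.0, 1e-3
image = np.ones((N, N))                                              # flat start image
for _ in range(3):                                                   # MLEM iterations
    update = np.zeros((N, N))
    for px, py, th in zip(src_x[idx], src_y[idx], theta):
        # perpendicular distance of every voxel to this event's LOR
        d = np.abs(-(xx - px) * np.sin(th) + (yy - py) * np.cos(th))
        kernel = np.exp(-d ** 2 / (2 * sigma ** 2))
        kernel[kernel < cutoff] = 0.0                                # region of response only
        fwd = (kernel * image).sum()                                 # forward projection
        if fwd > 0:
            update += kernel / fwd                                   # backproject the ratio
    image *= update / n_events                                       # uniform sensitivity (toy)

print("hottest voxel falls inside the disc:",
      bool(truth[np.unravel_index(image.argmax(), image.shape)]))
```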
Abstract:
Concentrating solar power is the common name of a technology based on capturing the thermal power of solar radiation, in a way suitable to reach temperatures able to activate a conventional (or advanced) thermodynamic cycle to generate electricity; this quest mainly depends on our ability to concentrate solar radiation in a cheap and efficient way. The present thesis is focused on highlighting and helping to solve some of the important issues related to this problem. The need to reduce costs in concentrating the direct solar radiation, without jeopardizing the thermodynamic objective of heating a fluid up to the required temperature, is of prime importance. Linear Fresnel collectors have been identified in the scientific literature as a technology with high potential to reach this cost reduction. This technology has been selected for a number of reasons, particularly the degrees of freedom of this type of concentrating configuration and its current immature state. In order to respond to this challenge, a very detailed study has been carried out on the optical properties of linear Fresnel collectors. This has been done by combining analytic and numerical methods. First, the effect of the design variables on the ratio of energy impinging onto the reflecting surface has been studied using analytically developed equations, together with models that predict the location and direct normal irradiance of the sun at any moment. Similarly, errors due to off-axis aberration, to the aperture of the reflected energy beam and to shading and blocking effects have been obtained analytically.
This has allowed the comparison of different shapes of mirrors –flat, cylindrical or parabolic–, as well as a preliminary optimization of the location and width of mirrors and receiver with no need of time-consuming numerical models. Second, in order to prove the validity of the analytic results, but also because the study of the reflection process is not precise enough when using analytic equations, a Monte Carlo Ray Trace model has been developed. The developed code is designed specifically for linear Fresnel collectors, which has reduced the computing time by several orders of magnitude compared to a more general commercial software package. This justifies the development of the new code. The model has first been used to compare radiation flux intensities and efficiencies of linear Fresnel collectors, for both multitube receiver and secondary reflector receiver technologies, with parabolic trough collectors. Finally, the results obtained in the analytic study together with the numeric model have been used to optimize the solar field for different orientations –North-South and East-West–, different locations –Almería and Aswan–, different tilts of the field towards the Tropic –from 0 deg to 32 deg– and different flux intensity minimum requirements –10 kW/m2 and 25 kW/m2–. This thesis work has led to several important findings that should be considered in the design of Fresnel solar fields. First, flat mirrors should not be used in any case, as cylindrical and parabolic mirrors lead to higher flux intensities and efficiencies. Second, it has been concluded that, in locations relatively far from the Tropics such as Almería, East-West embodiments are more efficient, while in Aswan a North-South orientation leads to a higher annual efficiency. It must be noted that East-West oriented solar fields require approximately half as many mirrors as NS-oriented fields, can be tilted towards the Equator in order to increase the efficiency, and attain similar values of flux intensity at the receiver every day at midday. On the other hand, in NS embodiments the flux intensity is more even during each single day. Finally, it has been proved that the use of analytic designs with variable shift between mirrors and variable width of mirrors across the field can lead to improvements in the electricity generated per reflecting surface square meter of up to 6%. The annual optical efficiency of parabolic troughs has been found to be 23% higher than the efficiency of Fresnel fields in Almería, but only around 9% higher in Aswan. This implies that, in order to attain the same levelized cost of electricity as parabolic troughs, the required reduction of installation costs per mirror square meter is in the range of 10-25%. Also, it is concluded that linear Fresnel collectors are more suitable for low latitude areas. As a consequence of the studies carried out in this thesis, an innovative storage system has been patented. This system takes into account the variation of the flux intensity along the day, especially for East-West oriented solar fields. As a result, the invention would allow the impinging radiation to be exploited over a longer period each day, appreciably increasing the optical and thermal efficiencies.
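The basic aiming relation used throughout such optical studies is that each mirror's normal must bisect the incoming sun direction and the direction from the mirror to the receiver. The sketch below evaluates the resulting row tilts for an illustrative receiver height, mirror layout and solar transversal angle; none of these values come from the thesis.

```python
# Sketch of the central-ray aiming relation for linear Fresnel mirror rows:
# the mirror normal bisects the sun direction and the mirror-to-receiver
# direction. Geometry and sun angle are illustrative.
import numpy as np

receiver_height = 8.0                                   # m above the mirror plane
mirror_x = np.array([-6.0, -3.0, 0.0, 3.0, 6.0])        # transversal mirror positions, m
sun_transversal = np.radians(20.0)                      # sun angle from vertical, + toward +x

angle_to_receiver = np.arctan2(-mirror_x, receiver_height)   # from vertical, + toward +x
tilt = 0.5 * (sun_transversal + angle_to_receiver)            # mirror normal from vertical

for x, t in zip(mirror_x, np.degrees(tilt)):
    print(f"mirror at x = {x:+.1f} m  ->  tilt {t:+.1f} deg")
```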
Abstract:
Nowadays robots have made their way into real applications that were prohibitive and unthinkable thirty years ago. This is mainly due to the increase in computational power and the evolution of the theoretical fields of robotics and control. Even though there is plenty of information in the current literature on these topics, it is not easy to find clear concepts of how to proceed in order to design and implement a controller for a robot. In general, the design of a controller requires a complete understanding and knowledge of the system to be controlled. Therefore, for advanced control techniques the system must first be identified. This particular objective is cumbersome and never straightforward, requiring great expertise, and certain criteria must be adopted. On the other hand, the problem of designing a controller is even more complex when dealing with Parallel Manipulators (PM), since their closed-loop structures give rise to a highly nonlinear system. On this basis the current work is developed, which intends to summarize and gather the concepts and experience involved in the control of a Hydraulic Parallel Manipulator. The main objective of this thesis is to provide a guide covering all the steps involved in the design of advanced control techniques for PMs. The analysis of the PM under study is carried down to the core of the mechanism: the hydraulic actuators. The actuators are modeled and experimentally identified. Additionally, some considerations regarding traditional PID controllers are presented and an adaptive controller is finally implemented. From a macro perspective, the kinematic and dynamic models of the PM are presented. Based on the model of the system and extending the adaptive controller of the actuator, a control strategy for the PM is developed and its performance is analyzed in simulation.
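As a generic illustration of the kind of adaptive scheme discussed, the sketch below applies an MIT-rule model-reference adaptation of a feedforward gain to a first-order actuator approximation. The plant, reference model and gains are illustrative and do not reproduce the thesis's identified hydraulic model or its controller.

```python
# Generic MIT-rule model-reference adaptive control sketch on a first-order
# actuator approximation; all values are illustrative, not the thesis's model.
import numpy as np

dt, T = 0.001, 5.0
steps = int(T / dt)
a, b = 2.0, 4.0                 # "true" actuator: dy/dt = -a*y + b*u (gain b unknown)
am, bm = 2.0, 3.0               # reference model with the same pole but desired gain
gamma = 0.5                     # adaptation gain; ideal feedforward gain is bm/b = 0.75

y = ym = 0.0
theta = 0.0                     # adaptive feedforward gain, u = theta * r
for k in range(steps):
    r = 1.0 if (k * dt) % 2 < 1 else -1.0        # square-wave position reference
    u = theta * r
    y += dt * (-a * y + b * u)                   # plant (forward Euler)
    ym += dt * (-am * ym + bm * r)               # reference model
    error = y - ym
    theta += dt * (-gamma * error * ym)          # MIT rule: d(theta)/dt = -gamma * e * ym

print(f"adapted gain theta = {theta:.2f}, final tracking error = {abs(y - ym):.3f}")
```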
Abstract:
PURPOSE The decision-making process plays a key role in organizations. Every decision-making process produces a final choice that may or may not prompt action. Recurrently, decision makers face the dichotomous question of following a traditional sequential decision-making process, where the output of a decision is used as the input of the next stage of the decision, or following a joint decision-making approach, where several decisions are taken simultaneously. The implications of the decision-making process affect different players in the organization, and the choice of decision-making approach is difficult to make, even with the current literature and practitioners' knowledge. The pursuit of better ways of making decisions has been a common goal for academics and practitioners. Management scientists use different techniques and approaches to improve different types of decisions; the purpose is to use the available resources (data and techniques) as well as possible to achieve the objectives of the organization. Developing and applying models and concepts may help to solve the managerial problems faced every day in different companies. As a result of this research, different decision models are presented to contribute to the body of knowledge of management science. The first models are focused on the manufacturing industry and the second part of the models on the health care industry. Although these models are case specific, they serve to exemplify that different approaches to the problems can provide interesting results. Unfortunately, there is no universal recipe that can be applied to all problems; furthermore, the same model may deliver good results with certain data and bad results with other data. A framework to analyse the data before selecting the model to be used is presented and tested in the models developed to exemplify the ideas. METHODOLOGY As the first step of the research, a systematic literature review on joint decision making is presented, along with the different opinions and suggestions of different scholars. For the next stage of the thesis, the decision-making processes of more than 50 companies from different sectors were analysed in the production planning area at the Job Shop level. The data were obtained using surveys and face-to-face interviews. The following part of the research into the decision-making process was carried out in two application fields that are highly relevant for our society: manufacturing and health care. The first step was to study the interactions and develop a mathematical model for the replenishment of the car assembly line, where the vehicle routing and inventory problems were combined. The next step was to add the scheduling of car production (car sequencing) decision and use metaheuristics such as ant colony optimization and genetic algorithms to assess whether the behaviour is maintained for problems of different sizes. A similar approach is presented for the production of semiconductors and aviation parts, where a hoist has to move from one station to another to carry out the work and a job schedule has to be built; for this problem, however, simulation was used for experimentation. In parallel, the scheduling of operating rooms was studied: surgeries were allocated to surgeons and the scheduling of operating rooms was analysed. The first part of this research was done in a teaching hospital, and for the second part the interaction of uncertainty was added.
Once the previous problems had been analysed, a general framework to characterize the instances was built. In the final chapter a general conclusion is presented. FINDINGS AND PRACTICAL IMPLICATIONS The first part of the contributions is an update of the decision-making literature review, together with an analysis of the possible savings resulting from a change in the decision process. Then, the results of the survey are presented, which show a lack of consistency between what managers believe and the actual degree of integration of their decisions. In the next stage of the thesis, a contribution to the body of knowledge of operations research is made with the joint solution of the replenishment, sequencing and inventory problem in the assembly line, together with parallel work on operating room scheduling, where different solution approaches are presented. In addition to the contribution of the solution methods, which use different techniques, the main contribution is the framework proposed to pre-evaluate the problem before choosing the techniques to solve it. However, there is no straightforward answer as to whether it is better to use joint or sequential solutions. Following the proposed framework, the evaluation of factors such as the flexibility of the answer, the number of actors, and the tightness of the data gives important hints as to the most suitable direction to take to tackle the problem. RESEARCH LIMITATIONS AND AVENUES FOR FUTURE RESEARCH In the first part of the work it was very complicated to calculate the possible savings of different projects, since in many papers these quantities are not reported or the impact is based on non-quantifiable benefits. Another issue is the confidentiality of many projects, whose data cannot be presented. For the car assembly line problem, more computational power would allow bigger instances to be solved. For the operating room problem there was a lack of historical data to perform a parallel analysis in the teaching hospital. In order to keep testing the decision framework it is necessary to apply it to more case studies, so as to generalize the results and make them more evident and less ambiguous. The health care field offers great opportunities: despite the recent awareness of the need to improve the decision-making process, there are still many opportunities for improvement. Another big difference from the automotive industry is that the latest improvements are not spread among all the actors. Therefore, in the future this research will focus more on collaboration between academia and the health care sector.
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are being increasingly used in a wide range of problems in pattern recognition such as image segmentation. However, the EM algorithm requires considerable computational time in its application to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be speeded up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
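For reference, a plain (dense) EM iteration for a one-dimensional Gaussian mixture over voxel intensities is sketched below; it only fixes the E- and M-step notation, and the paper's actual contribution (sparse incremental updates and a multiresolution kd-tree E-step) is deliberately omitted. The intensity distributions are synthetic.

```python
# Plain (dense) EM for a 1-D Gaussian mixture on synthetic "voxel" intensities;
# the kd-tree acceleration of the E-step is not shown.
import numpy as np

rng = np.random.default_rng(3)
intensities = np.concatenate([rng.normal(40, 6, 30_000),      # CSF-like (synthetic)
                              rng.normal(90, 8, 50_000),      # grey-matter-like (synthetic)
                              rng.normal(130, 7, 40_000)])    # white-matter-like (synthetic)

K = 3
pi = np.full(K, 1 / K)
mu = np.quantile(intensities, [0.2, 0.5, 0.8])                # rough initial means
var = np.full(K, intensities.var())

for _ in range(30):
    # E-step: posterior responsibility of each component for each voxel
    dens = pi * np.exp(-0.5 * (intensities[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means and variances from the responsibilities
    nk = resp.sum(axis=0)
    pi = nk / len(intensities)
    mu = (resp * intensities[:, None]).sum(axis=0) / nk
    var = (resp * (intensities[:, None] - mu) ** 2).sum(axis=0) / nk

print("means:", np.round(np.sort(mu), 1), "weights:", np.round(pi, 2))
```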
Abstract:
Based on the three-dimensional elastic inclusion model proposed by Dobrovolskii, we developed a rheological inclusion model to study earthquake preparation processes. By using the correspondence principle of the theory of rheological mechanics, we derived the analytic expressions of the viscoelastic displacements U(r, t), V(r, t) and W(r, t), the normal strains ε_xx(r, t), ε_yy(r, t) and ε_zz(r, t), and the bulk strain θ(r, t) at an arbitrary point (x, y, z) along the X, Y and Z axes produced by a three-dimensional inclusion in a semi-infinite rheological medium described by the standard linear rheological model. After computing the spatial-temporal variation of the bulk strain produced on the ground by such a spherical rheological inclusion, interesting results are obtained, suggesting that the bulk strain produced by a hard inclusion changes with time in three stages (alpha, beta, gamma) with different characteristics, similar to geodetic deformation observations but different from the results for a soft inclusion. These theoretical results can be used to explain the characteristics of the spatial-temporal evolution, patterns and quadrant distribution of earthquake precursors, as well as the changeability, spontaneity and complexity of short-term and imminent-term precursors. They offer a theoretical basis for building physical models of earthquake precursors and for predicting earthquakes.
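As a heavily simplified caricature of the time dependence the correspondence principle introduces, the sketch below scales an elastic bulk-strain value by the normalized creep function of a standard linear solid. This assumes the whole spatial solution scales with a single creep function, which is a strong simplification of the paper's full expressions, and all constants are illustrative.

```python
# Caricature only: scale an elastic bulk-strain value by the normalized creep
# function J(t)/J(0) of a standard linear solid. This is a strong
# simplification of the paper's full viscoelastic expressions; all constants
# are illustrative.
import numpy as np

E1, E2, eta = 30e9, 60e9, 1e19           # spring moduli (Pa) and dashpot viscosity (Pa s)
tau = eta / E2                            # retardation time of the standard linear solid

def creep(t):
    """Creep compliance J(t) of the standard linear solid."""
    return 1.0 / E1 + (1.0 / E2) * (1.0 - np.exp(-t / tau))

years = np.array([0.0, 1.0, 5.0, 20.0, 100.0]) * 3.156e7    # elapsed time in seconds
theta_elastic = 2.0e-7                    # elastic bulk strain at a surface point (illustrative)
theta_t = theta_elastic * creep(years) / creep(0.0)

for y, th in zip(years / 3.156e7, theta_t):
    print(f"t = {y:6.1f} yr : bulk strain = {th:.2e}")
```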
Abstract:
This paper presents a novel approach to the computation of primitive geometrical structures, where no prior knowledge about the visual scene is available and a high level of noise is expected. We base our work on the grouping principles of proximity and similarity, applied to points and to preliminary models. The former is realized using Minimum Spanning Trees (MST), on which we apply stable alignment and goodness-of-fit criteria; for the latter, we use spectral clustering of preliminary models. The algorithm can be generalized to various model-fitting settings without tuning of run parameters. Experiments demonstrate a significant improvement in the localization accuracy of models in plane, homography and motion segmentation examples. Unlike most other methods in the field, the efficiency of the algorithm does not depend on fine-tuning of run parameters.
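The proximity-grouping step can be illustrated in a few lines: build the minimum spanning tree of the points, cut edges much longer than the typical edge, and read off the connected groups. The synthetic data and the 3x-median cut-off are illustrative, and the paper's alignment/goodness-of-fit criteria and spectral clustering of preliminary models are not shown.

```python
# Minimal illustration of proximity grouping: MST over the points, drop
# unusually long edges, and read off the connected groups. Synthetic data
# and an illustrative 3x-median cut-off.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

rng = np.random.default_rng(4)
line = np.column_stack([np.linspace(0, 10, 40), 0.5 * np.linspace(0, 10, 40)])
line += rng.normal(0, 0.05, line.shape)                  # noisy points on a line
clutter = rng.uniform(5, 15, size=(15, 2)) + np.array([0, 6])
points = np.vstack([line, clutter])

mst = minimum_spanning_tree(squareform(pdist(points))).toarray()
edge_lengths = mst[mst > 0]
mst[mst > 3.0 * np.median(edge_lengths)] = 0             # cut unusually long edges
n_groups, labels = connected_components(mst, directed=False)
print(f"{n_groups} proximity groups, sizes: {np.bincount(labels)}")
```

With these synthetic points the structured line stays together as one large group, while the scattered clutter breaks into singletons, which is the intended effect of grouping by proximity.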