897 results for Simulation-based methods


Relevance: 90.00%

Publisher:

Abstract:

Many studies have shown the considerable potential of remote-sensing-based methods for deriving estimates of lake water quality. However, the reliable application of these methods across time and space is complicated by the diversity of lake types, sensor configurations, and the multitude of different algorithms proposed. This study tested one operational and 46 empirical algorithms sourced from the peer-reviewed literature, each of which has individually shown potential in independent studies for estimating lake water quality in terms of chlorophyll-a (algal biomass) and Secchi disc depth (SDD, a measure of water transparency). Nearly half (19) of the algorithms were unsuitable for use with the remote-sensing data available for this study. The remaining 28 were assessed using the Terra/Aqua satellite archive to identify the best-performing algorithms in terms of accuracy and transferability within the period 2001–2004 in four test lakes, namely Vänern, Vättern, Geneva, and Balaton. These lakes represent the broad continuum of large European lake types, varying in terms of eco-region (latitude/longitude and altitude), morphology, mixing regime, and trophic status. All algorithms were tested for each lake separately and for the lakes combined, to assess the degree of their applicability in ecologically different sites. None of the algorithms assessed in this study exhibited promise when all four lakes were combined into a single data set, and most algorithms performed poorly even for specific lake types. A chlorophyll-a retrieval algorithm originally developed for eutrophic lakes showed the most promising results (R² = 0.59) in oligotrophic lakes. Two SDD retrieval algorithms, one originally developed for turbid lakes and the other for lakes with various characteristics, exhibited promising results in relatively less turbid lakes (R² = 0.62 and 0.76, respectively). The results presented here highlight the complexity associated with remotely sensed lake water quality estimates and the high degree of uncertainty due to various limitations, including lake water optical properties and the choice of methods.
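
To make the kind of algorithm assessment described above concrete, the sketch below calibrates a generic empirical band-ratio chlorophyll-a retrieval against in situ match-ups and scores it with R². The band names, coefficients, and data are illustrative assumptions, not the operational or published algorithms actually tested in the study.

```python
# Minimal sketch: calibrating and scoring an empirical band-ratio chlorophyll-a
# algorithm. Band names, coefficients, and data are illustrative assumptions.
import numpy as np

def band_ratio_chla(r_red, r_nir, a, b):
    """Empirical retrieval of the illustrative form chl-a = a * (R_nir / R_red) + b."""
    return a * (r_nir / r_red) + b

def r_squared(observed, predicted):
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - np.mean(observed)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical match-ups of in situ chl-a (mg m^-3) and satellite reflectances
rng = np.random.default_rng(0)
r_red = rng.uniform(0.01, 0.05, 50)
r_nir = rng.uniform(0.005, 0.06, 50)
chla_in_situ = 25.0 * (r_nir / r_red) + 2.0 + rng.normal(0, 1.5, 50)

# Calibrate the two coefficients by least squares on the band ratio
X = np.column_stack([r_nir / r_red, np.ones_like(r_red)])
(a, b), *_ = np.linalg.lstsq(X, chla_in_situ, rcond=None)

chla_pred = band_ratio_chla(r_red, r_nir, a, b)
print(f"calibrated a={a:.2f}, b={b:.2f}, R^2={r_squared(chla_in_situ, chla_pred):.2f}")
```

Transferability, as assessed in the paper, would then amount to applying the coefficients calibrated on one lake to match-ups from another and recomputing R².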

Relevance: 90.00%

Publisher:

Abstract:

This paper describes an implementation of a method for integrating parametric, feature-based CAD models built on commercial software (CATIA) with the SU2 software framework. To exploit the adjoint-based methods for aerodynamic optimisation within SU2, a formulation to obtain geometric sensitivities directly from the commercial CAD parameterisation is introduced, enabling the calculation of gradients with respect to CAD-based design variables. To assess the accuracy and efficiency of this alternative approach, two aerodynamic optimisation problems are investigated: a 3D inviscid problem with multiple constraints, and a 2D viscous high-lift aerofoil problem without any constraints. Initial results show the new parameterisation obtaining reliable optima, with performance similar to that of the software's native parameterisations. In the final paper, details of computing CAD sensitivities will be provided, including their accuracy as well as the linking of geometric sensitivities to aerodynamic objective functions and constraints; the impact on the robustness of the overall method will be assessed and alternative parameterisations will be included.
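
The chaining of adjoint surface sensitivities to CAD design variables can be sketched as dJ/dp_i = sum_j (dJ/dX_j) * (dX_j/dp_i), with dX/dp obtained by perturbing the CAD parameters and regenerating the surface. The sketch below illustrates this contraction with a placeholder surface generator; it is not SU2 or CATIA code, and the function and parameter names are assumptions.

```python
# Minimal sketch of linking adjoint surface sensitivities to CAD design
# variables via the chain rule dJ/dp_i = sum_j (dJ/dX_j) * (dX_j/dp_i).
# regenerate_surface() is a placeholder for a CAD kernel evaluation, not a real API.
import numpy as np

def regenerate_surface(params):
    """Placeholder: return surface node coordinates (N, 3) for given CAD parameters."""
    x = np.linspace(0.0, 1.0, 101)
    thickness, camber = params
    y = camber * x * (1.0 - x) + thickness * np.sin(np.pi * x)
    return np.column_stack([x, y, np.zeros_like(x)])

def cad_gradient(params, dJ_dX, step=1e-6):
    """Finite-difference geometric sensitivities dX/dp, contracted with the
    adjoint surface sensitivities dJ/dX to give the design gradient dJ/dp."""
    base = regenerate_surface(params)
    grad = np.zeros(len(params))
    for i in range(len(params)):
        perturbed = np.array(params, dtype=float)
        perturbed[i] += step
        dX_dpi = (regenerate_surface(perturbed) - base) / step   # (N, 3)
        grad[i] = np.sum(dJ_dX * dX_dpi)
    return grad

# dJ/dX would come from the adjoint solver; here it is random for illustration
dJ_dX = np.random.default_rng(1).normal(size=(101, 3))
print(cad_gradient([0.12, 0.04], dJ_dX))
```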

Relevance: 90.00%

Publisher:

Abstract:

The assembly of products on paced (takted) flow lines, as an alternative to fixed-position assembly, generally improves the achievement of logistical targets. However, the full potential of this form of organization has largely been realized only for standardized products manufactured in high volumes. The main obstacles to producing large, variant-rich products with volatile work content on a flow line lie in mastering the associated complexity, in particular the flexible working-time models, optimized product sequences, and operational takt-time variations this requires. If several flow lines with differing production parameters are to be coupled, the challenge increases considerably again. When designing and testing suitable organizational models on the basis of diverse production scenarios, the use of simulation technology is indispensable.
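
As a rough illustration of why simulation is useful here, the sketch below evaluates a paced flow line with variant-dependent work content for several takt times, reporting makespan and the overload that flexible working time would have to absorb. Stations, takt times, and work contents are invented values, not data or models from the work described.

```python
# Minimal sketch: a paced flow line with variant-dependent work content.
# All numbers are illustrative assumptions, not data from the study.
import random

def simulate_flow_line(sequence, work_content, n_stations, takt):
    """Each product spends one takt per station; work exceeding the takt at a
    station is counted as overload to be absorbed by flexible working time."""
    overload = 0.0
    for variant in sequence:
        per_station = work_content[variant] / n_stations
        overload += max(0.0, per_station - takt) * n_stations
    makespan = (len(sequence) + n_stations - 1) * takt   # last unit leaves the line
    return makespan, overload

work_content = {"A": 40.0, "B": 55.0, "C": 70.0}   # hours per unit, by variant
random.seed(3)
sequence = random.choices(list(work_content), k=20)

for takt in (5.0, 6.0, 7.0):
    makespan, overload = simulate_flow_line(sequence, work_content, n_stations=10, takt=takt)
    print(f"takt={takt:>4} h  makespan={makespan:6.1f} h  overload={overload:6.1f} h")
```

Even this toy model shows the trade-off the abstract alludes to: shorter takts shorten the makespan but shift volatile work content into overload that working-time models and product sequencing must then handle.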

Relevance: 90.00%

Publisher:

Abstract:

Vision-based applications designed for human-machine interaction require fast and accurate hand detection. However, previous work in this field assumes various constraints, such as a limit on the number of detectable gestures, because hands are highly complex objects to locate. This paper presents an approach that changes the detection target without limiting the number of detected gestures. Using a cascade classifier, we detect hands based on their wrists. With this approach, we introduce two main contributions: (1) a reliable segmentation, independent of the gesture being made, and (2) a training phase that is faster than in previous cascade-classifier-based methods. The paper includes experimental evaluations with different video streams that illustrate the approach's efficiency and suitability for perceptual interfaces.
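
For readers unfamiliar with cascade-based detection, the sketch below runs a trained cascade over video frames with OpenCV. The cascade XML file and video name are placeholders; the authors' trained wrist cascade is not assumed to be available here.

```python
# Minimal sketch of applying a trained cascade classifier to video frames,
# in the spirit of the wrist-based detection described above. File names
# are hypothetical placeholders.
import cv2

cascade = cv2.CascadeClassifier("wrist_cascade.xml")  # hypothetical trained cascade
cap = cv2.VideoCapture("hands.avi")                   # hypothetical input video

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Multi-scale sliding-window detection with the boosted cascade
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                          minSize=(24, 24))
    for (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("wrist detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```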

Relevance: 90.00%

Publisher:

Abstract:

Objective: Leadership is particularly important in complex, highly interprofessional health care contexts involving a number of staff, some from the same specialty (intraprofessional) and others from different specialties (interprofessional). The authors recently published the concept of "The Burns Suite" (TBS) as a novel simulation tool to deliver interprofessional and teamwork training. It is unclear which leadership behaviors are the most important in an interprofessional burns resuscitation scenario, and whether they can be mapped onto current leadership theory. The purpose of this study was to perform a comprehensive video analysis of leadership behaviors within TBS. Methods: A total of 3 burns resuscitation simulations within TBS were recorded. The video analysis was grounded-theory inspired. Using predefined criteria, actions/interactions deemed to be leadership behaviors were identified. Using an inductive iterative process, 8 main leadership behaviors were identified. Cohen's κ coefficient was used to measure inter-rater agreement and was calculated as κ = 0.7 (substantial agreement). Each video was watched 4 times, focusing on 1 of the 4 team members per viewing (senior surgeon, senior nurse, trainee surgeon, and trainee nurse). The frequency and types of leadership behavior of each of the 4 team members were recorded. Differences were assessed for statistical significance using analysis of variance, with p < 0.05 taken as significant. Leadership behaviors were triangulated with verbal cues and actions from the videos. Results: All 3 scenarios were successfully completed. The mean scenario length was 22 minutes. A total of 362 leadership behaviors were recorded from the 12 participants. The most evident leadership behavior of all team members was adhering to guidelines (which effectively equates to following Advanced Trauma Life Support/Emergency Management of Severe Burns resuscitation guidelines and hence "maintaining standards"), followed by making decisions. Although in terms of total frequency the senior surgeon engaged in more leadership behaviors than the rest of the team, statistically there was no significant difference among the 4 members across the 8 leadership categories. This analysis highlights that "distributed leadership" was predominant, whereby leadership was "distributed" or "shared" among team members. The leadership behaviors within TBS also seemed to fall in line with the "direction, alignment, and commitment" ontology. Conclusions: Effective leadership is essential for the successful functioning of work teams and accomplishment of task goals. As the resuscitation of a patient with major burns is a dynamic event, team leaders require flexibility in their leadership behaviors to adapt effectively to changing situations. Understanding the leadership behaviors of different team members within an authentic simulation can identify important behaviors required to optimize nontechnical skills in a major resuscitation. Furthermore, attempting to map these behaviors onto leadership models can further our understanding of leadership theory. Collectively, this can aid the development of refined simulation scenarios for team members, and can be extrapolated into other areas of simulation-based team training and interprofessional education.
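
The inter-rater agreement step reported above (Cohen's κ = 0.7) can be reproduced in a few lines once two raters have coded the same events into behavior categories. The sketch below shows the computation; the example ratings and category names are invented, not the study's coding.

```python
# Minimal sketch: Cohen's kappa for two raters coding the same events into
# leadership-behavior categories. The example ratings are invented.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n          # observed agreement
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    expected = sum(count_a[c] * count_b[c]                                # chance agreement
                   for c in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - expected) / (1.0 - expected)

rater_a = ["guideline", "decision", "support", "decision", "guideline", "planning"]
rater_b = ["guideline", "decision", "support", "guideline", "guideline", "planning"]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")
```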

Relevance: 90.00%

Publisher:

Abstract:

Estimating the relative orientation and position of a camera is one of the central topics in the field of computer vision. The accuracy of a certain Finnish technology company's traffic sign inventory and localization process can be improved by utilizing this concept. The company's localization process uses video data produced by a vehicle-mounted camera. The accuracy of the estimated traffic sign locations depends on the relative orientation between the camera and the vehicle. This thesis proposes a computer-vision-based software solution which can estimate a camera's orientation relative to the movement direction of the vehicle from video data. The task was solved using feature-based methods and open-source software. On simulated data sets, the camera orientation estimates had an absolute error of 0.31 degrees on average. The software solution can be integrated into the traffic sign localization pipeline of the company in question.
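
A common open-source, feature-based route to this kind of estimate is to match features between consecutive frames, recover the essential matrix, and decompose it into a rotation and a unit translation; the translation direction approximates the vehicle's motion in camera coordinates. The sketch below shows that pipeline with OpenCV; the camera intrinsics and file name are assumptions, and this is not the thesis implementation.

```python
# Minimal sketch of a feature-based relative-pose estimate between two
# consecutive video frames (ORB features + essential matrix). K and the
# input file are assumptions; this is not the thesis code.
import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])            # assumed camera intrinsics

cap = cv2.VideoCapture("drive.mp4")        # hypothetical dash-cam video
ok1, frame1 = cap.read()
ok2, frame2 = cap.read()

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY), None)
kp2, des2 = orb.detectAndCompute(cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY), None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC, then decompose into rotation R and unit translation t
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

pitch = np.degrees(np.arctan2(t[1, 0], t[2, 0]))
yaw = np.degrees(np.arctan2(t[0, 0], t[2, 0]))
print(f"camera axis vs. motion direction: yaw={yaw:.2f} deg, pitch={pitch:.2f} deg")
```

Averaging such estimates over many frame pairs of straight-line driving is one plausible way to obtain a stable camera-to-vehicle orientation.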

Relevance: 90.00%

Publisher:

Abstract:

Understanding how aquatic species grow is fundamental in fisheries because stock assessment often relies on growth-dependent statistical models. Length-frequency-based methods become important when more applicable data for growth model estimation are either not available or very expensive. In this article, we develop a new framework for growth estimation from length-frequency data using a generalized von Bertalanffy growth model (VBGM) that allows time-dependent covariates to be incorporated. A finite mixture of normal distributions is used to model the length-frequency cohorts of each month, with the means constrained to follow a VBGM. The variances of the finite mixture components are constrained to be a function of mean length, reducing the number of parameters and allowing for an estimate of the variance at any length. To optimize the likelihood, we use a minorization–maximization (MM) algorithm with a Nelder–Mead sub-step. This work was motivated by the decline in catches of the blue swimmer crab (BSC) (Portunus armatus) off the east coast of Queensland, Australia. We test the method with a simulation study and then apply it to the BSC fishery data.
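
The core likelihood can be illustrated for a single month: a normal mixture whose component means follow the VBGM and whose standard deviations are proportional to mean length. The sketch below fits such a mixture directly with Nelder-Mead (a simplification of the article's MM algorithm with a Nelder-Mead sub-step); the cohort ages, parameter values, and data are illustrative, not BSC fishery values.

```python
# Minimal sketch: normal-mixture likelihood for one month's length-frequency
# sample, with means constrained to a von Bertalanffy growth model and
# standard deviations proportional to mean length. All values are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

cohort_ages = np.array([0.5, 1.5, 2.5])          # assumed ages (years) of the cohorts

def vbgm(age, l_inf, k, t0):
    return l_inf * (1.0 - np.exp(-k * (age - t0)))

def neg_log_lik(theta, lengths):
    l_inf, k, t0, cv, w1, w2 = theta
    weights = np.array([w1, w2, 1.0 - w1 - w2])
    if l_inf <= 0 or k <= 0 or cv <= 0 or np.any(weights <= 0):
        return np.inf
    means = vbgm(cohort_ages, l_inf, k, t0)
    if np.any(means <= 0):
        return np.inf
    sds = cv * means                              # variance tied to mean length
    dens = weights * norm.pdf(lengths[:, None], means, sds)
    return -np.sum(np.log(dens.sum(axis=1)))

# Simulated length-frequency sample (mm) from three cohorts
rng = np.random.default_rng(7)
true_means = vbgm(cohort_ages, 180.0, 0.8, -0.1)
lengths = np.concatenate([rng.normal(m, 0.1 * m, n)
                          for m, n in zip(true_means, (300, 200, 100))])

start = np.array([170.0, 0.6, 0.0, 0.12, 0.4, 0.3])
fit = minimize(neg_log_lik, start, args=(lengths,), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print("estimated (L_inf, k, t0, cv, w1, w2):", np.round(fit.x, 3))
```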

Relevance: 90.00%

Publisher:

Abstract:

Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques: using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aims of these experiments are to:
• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (Section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains can be made from multicore parallelism (Section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (Section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (Section 5.3.5).
To evaluate ZSIM, two types of test circuits were used:
1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators.
2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits. It allowed testing of a range of very large circuits, larger than those for which it was possible to obtain open-source files.
The experimental results show that with SIMD acceleration and multicore parallelism, ZSIM achieved a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself, whereas targeting GPUs requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron machine.
To conclude, the two main achievements are restated as follows. The primary achievement of this work was demonstrating that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
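
To give a flavour of why a lock-free, data-parallel layout suits SIMD hardware, the sketch below evaluates every gate of a tiny netlist for many input patterns at once using bit-wise operations on packed 64-bit words, so each word maps naturally onto SIMD lanes. This is an illustration of the general technique, not the ZSIM implementation.

```python
# Minimal sketch of data-parallel logic simulation: each gate output is
# evaluated for many input patterns at once via bit-wise operations on
# packed 64-bit words. Not the ZSIM code; the circuit is a toy full adder.
import numpy as np

# Netlist in topological order: (output, op, input_a, input_b)
netlist = [
    ("s1", "xor", "a", "b"),
    ("sum", "xor", "s1", "cin"),
    ("c1", "and", "a", "b"),
    ("c2", "and", "s1", "cin"),
    ("cout", "or", "c1", "c2"),
]

OPS = {"and": np.bitwise_and, "or": np.bitwise_or, "xor": np.bitwise_xor}

def simulate(netlist, inputs):
    """inputs: dict of signal name -> uint64 array; each bit is one test pattern."""
    values = dict(inputs)
    for out, op, a, b in netlist:            # gates already in dependency order
        values[out] = OPS[op](values[a], values[b])
    return values

rng = np.random.default_rng(11)
words = 4                                    # 4 * 64 = 256 patterns per signal
inputs = {s: rng.integers(0, 2**63, size=words, dtype=np.uint64) for s in ("a", "b", "cin")}
values = simulate(netlist, inputs)
print("first word of sum :", hex(int(values["sum"][0])))
print("first word of cout:", hex(int(values["cout"][0])))
```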

Relevance: 90.00%

Publisher:

Abstract:

The use of chemical control measures to reduce the impact of parasite and pest species has frequently resulted in the development of resistance. Thus, resistance management has become a key concern in human and veterinary medicine, and in agricultural production. Although it is known that factors such as gene flow between susceptible and resistant populations, drug type, application methods, and costs of resistance can affect the rate of resistance evolution, less is known about the impacts of density-dependent eco-evolutionary processes that could be altered by drug-induced mortality. The overall aim of this thesis was to take an experimental evolution approach to assess how life history traits respond to drug selection, using a free-living dioecious worm (Caenorhabditis remanei) as a model. In Chapter 2, I defined the relationship between C. remanei survival and Ivermectin dose over a range of concentrations, in order to control the intensity of selection used in the selection experiment described in Chapter 4. The dose-response data were also used to appraise curve-fitting methods, using Akaike Information Criterion (AIC) model selection to compare a series of nonlinear models. The type of model fitted to the dose-response data had a significant effect on the estimates of LD50 and LD99, suggesting that failure to fit an appropriate model could give misleading estimates of resistance status. In addition, simulated data were used to establish that a potential cost of resistance could be predicted by comparing survival at the upper asymptote of dose-response curves for resistant and susceptible populations, even when differences were as low as 4%. This approach to dose-response modeling ensures that the maximum amount of useful information relating to resistance is gathered in one study. In Chapter 3, I asked how simulations could be used to inform important design choices in selection experiments. Specifically, I focused on the effects of both within- and between-line variation on estimated power when detecting small, medium and large effect sizes. Using mixed-effect models on simulated data, I demonstrated that commonly used designs with realistic levels of variation could be underpowered for substantial effect sizes. Thus, simulation-based power analysis provides an effective way to avoid under- or overpowering study designs that incorporate variation due to random effects. In Chapter 4, I investigated how Ivermectin dosage and changes in population density affect the rate of resistance evolution. I exposed replicate lines of C. remanei to two doses of Ivermectin (high and low) to assess the relative survival of lines selected in drug-treated environments compared with untreated controls over 10 generations. Additionally, I maintained lines where mortality was imposed randomly, to control for differences in density between drug treatments and to distinguish between the evolutionary consequences of drug treatment and ecological processes affected by changes in density-dependent feedback. Intriguingly, both drug-selected and random-mortality lines showed an increase in survivorship when challenged with Ivermectin; the magnitude of this increase varied with the intensity of selection and life-history stage. The results suggest that interactions between density-dependent processes and life history may mediate evolved changes in susceptibility to control measures, which could result in misleading conclusions about the evolution of heritable resistance following drug treatment.
In Chapter 5, I investigated whether the apparent changes in drug susceptibility found in Chapter 4 were related to evolved changes in life-history of C. remanei populations after selection in drug-treated and random-mortality environments. Rapid passage of lines in the drug-free environment had no effect on the measured life-history traits. In the drug-free environment, adult size and fecundity of drug-selected lines increased compared to the controls but drug selection did not affect lifespan. In the treated environment, drug-selected lines showed increased lifespan and fecundity relative to controls. Adult size of randomly culled lines responded in a similar way to drug-selected lines in the drug-free environment, but no change in fecundity or lifespan was observed in either environment. The results suggest that life histories of nematodes can respond to selection as a result of the application of control measures. Failure to take these responses into account when applying control measures could result in adverse outcomes, such as larger and more fecund parasites, as well as over-estimation of the development of genetically controlled resistance. In conclusion, my thesis shows that there may be a complex relationship between drug selection, density-dependent regulatory processes and life history of populations challenged with control measures. This relationship could have implications for how resistance is monitored and managed if life histories of parasitic species show such eco-evolutionary responses to drug application.
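
The Chapter 2 workflow, fitting alternative nonlinear dose-response models, comparing them with AIC, and reading off LD50, can be sketched in a few lines. The models, doses, and survival proportions below are invented illustrations, not the C. remanei data or the exact model set used in the thesis.

```python
# Minimal sketch: fitting two candidate dose-response curves and comparing
# them with AIC, then reading off LD50 from each. Data and models are invented.
import numpy as np
from scipy.optimize import curve_fit

def loglogistic2(dose, ld50, slope):
    """Two-parameter log-logistic: survival falls from 1 to 0."""
    return 1.0 / (1.0 + (dose / ld50) ** slope)

def loglogistic3(dose, ld50, slope, upper):
    """Three-parameter variant with a free upper asymptote."""
    return upper / (1.0 + (dose / ld50) ** slope)

def aic(y, y_hat, n_params):
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * n_params     # Gaussian-error AIC, up to a constant

doses = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])            # illustrative doses
survival = np.array([0.97, 0.95, 0.88, 0.61, 0.30, 0.12, 0.04])    # proportion surviving

p2, _ = curve_fit(loglogistic2, doses, survival, p0=[2.0, 2.0], bounds=(1e-6, np.inf))
p3, _ = curve_fit(loglogistic3, doses, survival, p0=[2.0, 2.0, 1.0], bounds=(1e-6, np.inf))

for name, fn, p in (("2-param", loglogistic2, p2), ("3-param", loglogistic3, p3)):
    print(f"{name}: LD50={p[0]:.2f}, AIC={aic(survival, fn(doses, *p), len(p)):.2f}")
```

The thesis's point that the chosen model shapes the LD50/LD99 estimates shows up directly here: the two fits generally return different LD50 values, and AIC arbitrates between them.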

Relevance: 90.00%

Publisher:

Abstract:

Mass spectrometry (MS)-based proteomics has seen significant technical advances during the past two decades, and mass spectrometry has become a central tool in many biosciences. Despite the popularity of MS-based methods, the handling of systematic non-biological variation in the data remains a common problem. This biasing variation can result from several sources, ranging from sample handling to differences caused by the instrumentation. Normalization is the procedure which aims to account for this biasing variation and make samples comparable. Many normalization methods commonly used in proteomics have been adapted from the DNA-microarray world. Studies comparing normalization methods on proteomics data sets using some variability measures exist. However, a more thorough comparison, looking at the quantitative and qualitative differences in the performance of the different normalization methods and at their ability to preserve the true differential expression signal of proteins, has been lacking. In this thesis, several popular and widely used normalization methods (linear regression normalization, local regression normalization, variance stabilizing normalization, quantile normalization, median central tendency normalization, and variants of some of the aforementioned methods), representing different normalization strategies, are compared and evaluated with a benchmark spike-in proteomics data set. The normalization methods are evaluated in several ways. Their performance is evaluated qualitatively and quantitatively on a global scale and in pairwise comparisons of sample groups. In addition, it is investigated whether performing the normalization globally on the whole data set, or pairwise for the comparison pairs examined, affects the method's ability to normalize the data and preserve the true differential expression signal. In this thesis, both major and minor differences in the performance of the different normalization methods were found. Also, the way in which the normalization was performed (global normalization of the whole data set or pairwise normalization of the comparison pair) affected the performance of some of the methods in pairwise comparisons. Differences among variants of the same methods were also observed.
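
As a concrete example of one of the compared strategies, the sketch below applies quantile normalization to a small intensity matrix: each sample (column) is forced onto the same empirical intensity distribution. The matrix is invented, ties are handled naively, and this is only an illustration of the method, not the thesis's evaluation code.

```python
# Minimal sketch of quantile normalization: proteins (rows) x samples (columns).
# The example matrix is invented; ties are handled naively.
import numpy as np

def quantile_normalize(x):
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)     # rank of each value within its column
    mean_sorted = np.mean(np.sort(x, axis=0), axis=1)     # reference distribution (row-wise mean of sorted columns)
    return mean_sorted[ranks]                             # map each value to the reference value at its rank

x = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.5, 6.0],
              [4.0, 2.0, 5.0]])

print(quantile_normalize(x))
```

After this transformation every column has identical sorted values, which is exactly the property that makes quantile normalization powerful but also potentially aggressive when true differential expression is widespread.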

Relevance: 90.00%

Publisher:

Abstract:

The overarching theme of this thesis is mesoscale optical and optoelectronic design of photovoltaic and photoelectrochemical devices. In a photovoltaic device, light absorption and charge carrier transport are coupled together on the mesoscale, and in a photoelectrochemical device, light absorption, charge carrier transport, catalysis, and solution species transport are all coupled together on the mesoscale. The work discussed herein demonstrates that simulation-based mesoscale optical and optoelectronic modeling can lead to detailed understanding of the operation and performance of these complex mesostructured devices, serve as a powerful tool for device optimization, and efficiently guide device design and experimental fabrication efforts. In-depth studies of two mesoscale wire-based device designs illustrate these principles—(i) an optoelectronic study of a tandem Si|WO3 microwire photoelectrochemical device, and (ii) an optical study of III-V nanowire arrays.

The study of the monolithic, tandem, Si|WO3 microwire photoelectrochemical device begins with development and validation of an optoelectronic model with experiment. This study capitalizes on synergy between experiment and simulation to demonstrate the model’s predictive power for extractable device voltage and light-limited current density. The developed model is then used to understand the limiting factors of the device and optimize its optoelectronic performance. The results of this work reveal that high fidelity modeling can facilitate unequivocal identification of limiting phenomena, such as parasitic absorption via excitation of a surface plasmon-polariton mode, and quick design optimization, achieving over a 300% enhancement in optoelectronic performance over a nominal design for this device architecture, which would be time-consuming and challenging to do via experiment.

The work on III-V nanowire arrays also starts as a collaboration of experiment and simulation aimed at gaining understanding of unprecedented, experimentally observed absorption enhancements in sparse arrays of vertically-oriented GaAs nanowires. To explain this resonant absorption in periodic arrays of high index semiconductor nanowires, a unified framework that combines a leaky waveguide theory perspective and that of photonic crystals supporting Bloch modes is developed in the context of silicon, using both analytic theory and electromagnetic simulations. This detailed theoretical understanding is then applied to a simulation-based optimization of light absorption in sparse arrays of GaAs nanowires. Near-unity absorption in sparse, 5% fill fraction arrays is demonstrated via tapering of nanowires and multiple wire radii in a single array. Finally, experimental efforts are presented towards fabrication of the optimized array geometries. A hybrid self-catalyzed and selective area MOCVD growth method is used to establish morphology control of GaP nanowire arrays. Similarly, morphology and pattern control of nanowires is demonstrated with ICP-RIE of InP. Optical characterization of the InP nanowire arrays gives proof of principle that tapering and multiple wire radii can lead to near-unity absorption in sparse arrays of InP nanowires.
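
For scale, the geometric fill fraction of a sparse square-lattice wire array and a deliberately naive single-pass absorption estimate can be computed as below. This crude effective-medium picture ignores the leaky-waveguide and Bloch-mode resonances that the full-wave simulations in this work capture (and that drive the near-unity absorption reported); the wire geometry and absorption coefficient are assumed illustrative values.

```python
# Minimal sketch: fill fraction of a square-lattice nanowire array and a naive
# single-pass Beer-Lambert absorption estimate. Deliberately ignores the
# resonant waveguide/Bloch-mode effects discussed above; values are assumptions.
import numpy as np

def fill_fraction(radius_nm, pitch_nm):
    """Areal fill fraction of vertical cylinders on a square lattice."""
    return np.pi * radius_nm**2 / pitch_nm**2

def naive_absorption(radius_nm, pitch_nm, length_um, alpha_per_um):
    """Single-pass Beer-Lambert absorption weighted by the fill fraction."""
    ff = fill_fraction(radius_nm, pitch_nm)
    return ff * (1.0 - np.exp(-alpha_per_um * length_um))

radius, pitch, length = 65.0, 515.0, 3.0   # nm, nm, um (illustrative geometry, ~5% fill)
alpha = 5.0                                # um^-1, assumed absorption coefficient

print(f"fill fraction   : {fill_fraction(radius, pitch):.3f}")
print(f"naive absorption: {naive_absorption(radius, pitch, length, alpha):.3f}")
```

The gap between this naive estimate and the near-unity absorption demonstrated in the optimized arrays is precisely the resonant enhancement that motivates the full-wave, simulation-based design approach.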

Relevance: 90.00%

Publisher:

Abstract:

Master's degree (Mestrado) Vinifera Euromaster - Instituto Superior de Agronomia - UL

Relevance: 90.00%

Publisher:

Abstract:

Predicting accurate bond length alternations (BLAs) in long conjugated oligomers has been a significant challenge for electronic-structure methods for many decades, made particularly important by the close relationship between BLA and the rich optoelectronic properties of π-delocalized systems. Here, we test the accuracy of recently developed, and increasingly popular, double hybrid (DH) functionals, positioned at the top of Jacob's Ladder of DFT methods of increasing sophistication, computational cost, and accuracy, owing to their incorporation of MP2 correlation energy. Our test systems comprise oligomeric series of polyacetylene, polymethineimine, and polysilaacetylene up to six units long. MP2 calculations reveal a pronounced shift in BLAs between the 6-31G(d) basis set used in many studies of BLA to date and the larger cc-pVTZ basis set, but only modest shifts between cc-pVTZ and aug-cc-pVQZ results. We hence perform new reference CCSD(T)/cc-pVTZ calculations for all three series of oligomers, against which we assess the performance of several families of DH functionals based on BLYP, PBE, and TPSS, along with lower-rung relatives including global and range-separated hybrids. Our results show that DH functionals systematically improve the accuracy of BLAs relative to single hybrid functionals. xDH-PBE0 (N^4 scaling using SOS-MP2) emerges as a DH functional rivaling the BLA accuracy of SCS-MP2 (N^5 scaling), which was found to offer the best compromise between computational cost and accuracy the last time the BLA accuracy of DFT- and wave-function-based methods was systematically investigated. Interestingly, xDH-PBE0 (XYG3), which differs from other DHs in that its MP2 term uses PBE0 (B3LYP) orbitals that are not self-consistent with the DH functional, is an outlier from the trend of decreasing average BLA errors with increasing fractions of MP2 correlation and HF exchange.
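
For readers outside the field, BLA itself is a simple quantity: the difference between the average lengths of the nominally single and nominally double bonds along the conjugated backbone. The sketch below computes it from an ordered list of bond lengths; the values are illustrative, not results from the CCSD(T) or DH calculations discussed.

```python
# Minimal sketch: bond length alternation (BLA) as the mean length of the
# long (nominally single) bonds minus the mean length of the short
# (nominally double) bonds. Bond lengths are illustrative values.
import numpy as np

def bla(bond_lengths):
    """bond_lengths: backbone bond lengths in order along the chain (Angstrom)."""
    b = np.asarray(bond_lengths)
    long_bonds = b[0::2] if b[0] > b[1] else b[1::2]
    short_bonds = b[1::2] if b[0] > b[1] else b[0::2]
    return long_bonds.mean() - short_bonds.mean()

# Alternating single/double bonds of a short polyacetylene-like oligomer (illustrative)
bonds = [1.451, 1.368, 1.444, 1.371, 1.442, 1.372, 1.441]
print(f"BLA = {bla(bonds):.3f} Angstrom")
```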

Relevance: 90.00%

Publisher:

Abstract:

With recent advances in remote sensing processing technology, it has become more feasible to begin analysis of the enormous historical archive of remotely sensed data. This historical data provides valuable information on a wide variety of topics which can influence the lives of millions of people if processed correctly and in a timely manner. One such field of benefit is landslide mapping and inventory. These data provide a historical reference for those who live near high-risk areas, so that future disasters may be avoided. In order to properly map landslides remotely, an optimum method must first be determined. Historically, mapping has been attempted using pixel-based methods such as unsupervised and supervised classification. These methods are limited in that they characterize an image only spectrally, on the basis of single pixel values, producing results that are prone to false positives and often lack meaningful objects. Recently, several reliable methods of Object Oriented Analysis (OOA) have been developed which utilize a full range of spectral, spatial, textural, and contextual parameters to delineate regions of interest. A comparison of these two approaches on a historical dataset of the landslide-affected city of San Juan La Laguna, Guatemala has demonstrated the benefits of OOA methods over unsupervised classification. Overall accuracies of 96.5% and 94.3% and F-scores of 84.3% and 77.9% were achieved for the OOA and unsupervised classification methods, respectively. The larger difference in F-score results from the low precision of unsupervised classification, caused by poor false-positive removal, the greatest shortcoming of this method.
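
The accuracy and F-score comparison above follows directly from a binary landslide / non-landslide confusion matrix. The sketch below shows the computation; the counts are invented for illustration and are not those of the San Juan La Laguna study.

```python
# Minimal sketch: overall accuracy, precision, recall, and F-score for a binary
# landslide / non-landslide classification. Confusion-matrix counts are invented.
def scores(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_score

# (true positives, false positives, false negatives, true negatives) per method
for name, cm in (("OOA", (410, 60, 90, 9440)), ("unsupervised", (380, 240, 120, 9260))):
    acc, prec, rec, f1 = scores(*cm)
    print(f"{name:>12}: accuracy={acc:.1%}  precision={prec:.1%}  recall={rec:.1%}  F-score={f1:.1%}")
```

Because true negatives (stable terrain) dominate, overall accuracy stays high for both methods, while the F-score, which ignores true negatives, exposes the precision penalty of poor false-positive removal, mirroring the pattern reported above.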