591 results for Monotone Iterations


Relevance: 10.00%

Publisher:

Abstract:

Projects for the developing world usually find themselves at the bottom of an engineer's priority list. There is often very little engineering effort placed on creating new products for the poorest people in the world. This trend is beginning to change as people recognize the potential of these projects. Engineers are beginning to try to solve some of the direst issues in the developing world, and many are having positive impacts. However, the conditions needed to support these projects can often be maintained only in the short term, so there is now a need for greater sustainability. Sustainability has a wide variety of definitions in both business and engineering. These concepts are analyzed and synthesized to develop a broad meaning of sustainability in the developing world, stemming primarily from the “triple bottom line” concept of economic, social, and environmental sustainability. Using this model and several international standards, this thesis develops a metric for guiding and evaluating the sustainability of engineering projects. The metric contains qualitative questions that investigate the sustainability of a project. It is used to assess several existing projects in order to identify flaws. Specifically, three projects seeking to deliver eyeglasses are analyzed for weaknesses to help define a new design approach for achieving better results. Using the metric as a guiding tool, teams designed two pieces of optometry equipment: one to cut lenses for eyeglasses and the other to diagnose refractive error, or prescription. These designs were created and prototyped in both the developed and developing worlds to determine general feasibility. Although there is a recognized need for eventual design iterations, the whole project is evaluated using the developed metric and compared to the existing projects. Overall, the results demonstrate the improvements in the project's long-term sustainability that resulted from the use of the sustainability metric.

Relevance: 10.00%

Publisher:

Abstract:

The present study was conducted to estimate the direct losses due to Neospora caninum in Swiss dairy cattle and to assess the costs and benefits of different potential control strategies. A Monte Carlo simulation spreadsheet module was developed to estimate the direct costs caused by N. caninum, with and without control strategies, and to estimate the costs of these control strategies in a financial analysis. The control strategies considered were "testing and culling of seropositive female cattle", "discontinued breeding with offspring from seropositive cows", "chemotherapeutical treatment of female offspring" and "vaccination of all female cattle". Each parameter in the module that was considered uncertain was described using a probability distribution. The simulations were run with 20,000 iterations over a time period of 25 years. The median annual losses due to N. caninum in the Swiss dairy cow population were estimated at 9.7 million euros. All control strategies that required yearly serological testing of all cattle in the population produced high costs and thus were not financially profitable. Among the other control strategies, two showed benefit-cost ratios (BCR) >1 and positive net present values (NPV): "discontinued breeding with offspring from seropositive cows" (BCR = 1.29, NPV = 25 million euros) and "chemotherapeutical treatment of all female offspring" (BCR = 2.95, NPV = 59 million euros). In economic terms, the best control strategy currently available would therefore be "discontinued breeding with offspring from seropositive cows".
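As a rough illustration of this kind of stochastic cost-benefit calculation, the sketch below runs a Monte Carlo loop over discounted benefits and costs of a single control strategy; the distributions, cash-flow figures, and discount rate are hypothetical placeholders, not the study's actual spreadsheet module or Swiss herd parameters. A strategy is financially attractive when BCR > 1 and NPV > 0, the criteria used above.

```python
import numpy as np

rng = np.random.default_rng(42)
N_ITER = 20_000          # iterations, as in the study
YEARS = 25               # simulated time horizon
DISCOUNT = 0.03          # hypothetical annual discount rate

def simulate_strategy(annual_loss_avoided, annual_control_cost):
    """Return discounted benefits and costs for one Monte Carlo iteration."""
    t = np.arange(1, YEARS + 1)
    discount_factors = 1.0 / (1.0 + DISCOUNT) ** t
    benefits = (annual_loss_avoided * discount_factors).sum()
    costs = (annual_control_cost * discount_factors).sum()
    return benefits, costs

bcr = np.empty(N_ITER)
npv = np.empty(N_ITER)
for i in range(N_ITER):
    # Hypothetical uncertain inputs (millions of euros per year):
    loss_avoided = rng.triangular(1.0, 2.5, 4.0)   # losses prevented by the strategy
    control_cost = rng.triangular(0.3, 0.8, 1.5)   # cost of running the strategy
    b, c = simulate_strategy(loss_avoided, control_cost)
    bcr[i] = b / c
    npv[i] = b - c

print(f"median BCR = {np.median(bcr):.2f}, median NPV = {np.median(npv):.1f} million euros")
```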

Relevance: 10.00%

Publisher:

Abstract:

We derive a new class of iterative schemes for accelerating the convergence of the EM algorithm by exploiting the connection between fixed-point iterations and extrapolation methods. First, we present a general formulation of one-step iterative schemes, which are obtained by cycling with the extrapolation methods. We then square the one-step schemes to obtain the new class of methods, which we call SQUAREM. Squaring a one-step iterative scheme simply means applying it twice within each cycle of the extrapolation method. Here we focus on first-order, or rank-one, extrapolation methods for two reasons: (1) simplicity and (2) computational efficiency. In particular, we study two first-order extrapolation methods, the reduced rank extrapolation (RRE1) and minimal polynomial extrapolation (MPE1). The convergence of the new schemes, both one-step and squared, is non-monotonic with respect to the residual norm. The first-order one-step and SQUAREM schemes are linearly convergent, like the EM algorithm, but they have a faster rate of convergence. We demonstrate, through five different examples, the effectiveness of the first-order SQUAREM schemes, SqRRE1 and SqMPE1, in accelerating the EM algorithm. The SQUAREM schemes are also shown to be vastly superior to their one-step counterparts, RRE1 and MPE1, in terms of computational efficiency. The proposed extrapolation schemes can fail due to the numerical problems of stagnation and near breakdown. We have developed a new hybrid iterative scheme that combines the RRE1 and MPE1 schemes in such a manner that it overcomes both stagnation and near breakdown. The squared first-order hybrid scheme, SqHyb1, emerges as the iterative scheme of choice based on our numerical experiments: it combines the fast convergence of SqMPE1, while avoiding near breakdowns, with the stability of SqRRE1, while avoiding stagnations. The SQUAREM methods can be incorporated very easily into an existing EM algorithm. They only require the basic EM step for their implementation and do not require any other auxiliary quantities such as the complete-data log-likelihood, its gradient, or its Hessian. They are an attractive option in problems with a very large number of parameters, and in problems where the statistical model is complex, the EM algorithm is slow, and each EM step is computationally demanding.
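A minimal sketch of the squared-extrapolation idea, written for a generic fixed-point map standing in for the EM update. The update form theta' = theta0 - 2*alpha*r + alpha^2*v is the standard SQUAREM cycle; the steplength used here is just one common norm-ratio choice, and the SqRRE1/SqMPE1/SqHyb1 steplength rules of the thesis are not reproduced.

```python
import numpy as np

def squarem_step(theta0, fixed_point_map):
    """One cycle of a squared extrapolation scheme for a fixed-point (EM-like) map."""
    theta1 = fixed_point_map(theta0)
    theta2 = fixed_point_map(theta1)
    r = theta1 - theta0                    # first difference
    v = (theta2 - theta1) - r              # second difference
    if np.allclose(v, 0.0):
        return theta2                      # already (numerically) converged
    alpha = -np.sqrt(r @ r) / np.sqrt(v @ v)   # one common steplength choice
    theta_prime = theta0 - 2.0 * alpha * r + alpha**2 * v
    # Extra fixed-point step for stability, used in some SQUAREM variants:
    return fixed_point_map(theta_prime)

def accelerate(theta, fixed_point_map, tol=1e-10, max_cycles=500):
    for _ in range(max_cycles):
        theta_new = squarem_step(theta, fixed_point_map)
        if np.linalg.norm(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta

# Toy stand-in for an EM update: a slowly converging linear contraction.
A = np.array([[0.98, 0.01], [0.01, 0.97]])
b = np.array([1.0, 2.0])

def em_like_map(t):
    return A @ t + b                       # fixed point solves (I - A) t = b

print(accelerate(np.zeros(2), em_like_map))
print(np.linalg.solve(np.eye(2) - A, b))   # exact fixed point, for comparison
```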

Relevance: 10.00%

Publisher:

Abstract:

This dissertation discusses structural-electrostatic modeling techniques, genetic-algorithm-based optimization, and control design for electrostatic micro devices. First, an alternative modeling technique, the interpolated force model, for electrostatic micro devices is discussed. The method provides improved computational efficiency relative to a benchmark model, as well as improved accuracy for irregular electrode configurations relative to a common approximate model, the parallel plate approximation model. For the configuration most similar to two parallel plates, expected to be the best-case scenario for the approximate model, both the parallel plate approximation model and the interpolated force model maintained less than 2.2% error in static deflection compared to the benchmark model. For the configuration expected to be the worst-case scenario for the parallel plate approximation model, the interpolated force model maintained less than 2.9% error in static deflection, while the parallel plate approximation model is incapable of handling the configuration. Second, genetic-algorithm-based optimization is shown to improve the design of an electrostatic micro sensor. The design space is enlarged from published design spaces to include the configuration of both sensing and actuation electrodes, material distribution, actuation voltage, and other geometric dimensions. For a small population, the design was improved by approximately a factor of 6 over 15 generations to a fitness value of 3.2 fF. For a larger population seeded with the best configurations of the previous optimization, the design was improved by another 7% in 5 generations to a fitness value of 3.0 fF. Third, a learning control algorithm is presented that reduces the closing time of a radio-frequency microelectromechanical systems (RF MEMS) switch by minimizing bounce while maintaining robustness to fabrication variability. Electrostatic actuation of the plate causes pull-in with high impact velocities, which are difficult to control due to parameter variations from part to part. A single degree-of-freedom model was utilized to design a learning control algorithm that shapes the actuation voltage based on the open/closed state of the switch. Experiments on 3 test switches show that after 5-10 iterations, the learning algorithm lands the switch with an impact velocity not exceeding 0.2 m/s, eliminating bounce.
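For orientation only, a sketch of the parallel-plate approximation mentioned above: the static deflection x of a spring-suspended electrode balances the spring force against the electrostatic attraction, k*x = eps0*A*V^2 / (2*(g - x)^2), solved here by fixed-point iteration. All device parameters below are hypothetical and unrelated to the dissertation's designs.

```python
EPS0 = 8.854e-12      # vacuum permittivity, F/m

# Hypothetical device parameters (not taken from the dissertation):
k = 10.0              # spring stiffness, N/m
A = (200e-6) ** 2     # electrode area, m^2 (200 um x 200 um plate)
g = 2e-6              # initial gap, m
V = 5.0               # applied voltage, V

def electrostatic_force(x):
    """Parallel-plate approximation of the attractive force at deflection x."""
    return EPS0 * A * V**2 / (2.0 * (g - x) ** 2)

# Fixed-point iteration x <- F_elec(x) / k for the static equilibrium k*x = F_elec(x).
x = 0.0
for _ in range(200):
    x_new = electrostatic_force(x) / k
    if abs(x_new - x) < 1e-15:
        break
    x = x_new

print(f"static deflection: {x*1e9:.1f} nm (pull-in occurs beyond x = g/3 = {g/3*1e9:.0f} nm)")
```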

Relevance: 10.00%

Publisher:

Abstract:

This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation for localization systems. It proposes highly parallel algorithms for performing subspace decomposition and polynomial rooting, which are otherwise traditionally implemented using sequential algorithms. The proposed algorithms address the emerging need for real-time localization for a wide range of applications. As the antenna array size increases, the complexity of the signal processing algorithms increases, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that maintain a considerable improvement over traditional algorithms, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps and act as the bottleneck to achieving real-time performance. The proposed algorithms are suitable for implementation on field-programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware, or application-specific integrated circuits (ASICs), which offer a large number of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable, and easy to implement. First, this thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations it takes to converge to the correct singular values, thus getting closer to real-time performance. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the various hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented. The FPGA design is pipelined to the maximum extent to increase the maximum achievable frequency of operation. The system was developed with the objective of achieving high throughput, and various modern cores available in FPGAs were used to maximize performance; these modules are described in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the root-MUSIC polynomial's complex dynamics were exploited to derive this polynomial rooting method. The technique exhibits parallelism and converges to the desired root within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time that the complex dynamics of the root-MUSIC polynomial have been analyzed to propose such an algorithm. In all, the thesis addresses two major bottlenecks in a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
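A minimal sketch of the Newton-iteration building block referred to above, applied to a generic polynomial. The thesis's method additionally exploits root-MUSIC-specific structure and runs many such iterations in parallel (one per starting point); that structure is not reproduced here.

```python
import numpy as np

def newton_polynomial_root(coeffs, z0, max_iter=50, tol=1e-12):
    """Refine one root of a polynomial with Newton's method.

    coeffs are highest-degree-first, as in numpy.polyval; z0 is a complex
    starting point. Returns the converged root estimate.
    """
    dcoeffs = np.polyder(coeffs)
    z = z0
    for _ in range(max_iter):
        p = np.polyval(coeffs, z)
        dp = np.polyval(dcoeffs, z)
        if abs(dp) == 0.0:           # avoid division by zero at a critical point
            break
        step = p / dp
        z = z - step
        if abs(step) < tol:
            break
    return z

# Example: roots of z^4 - 1, started from points near the unit circle.
coeffs = np.array([1.0, 0.0, 0.0, 0.0, -1.0])
starts = np.exp(1j * np.array([0.1, 1.5, 3.0, 4.6]))
roots = [newton_polynomial_root(coeffs, z0) for z0 in starts]
print(np.round(roots, 6))   # approx. 1, i, -1, -i
```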

Relevance: 10.00%

Publisher:

Abstract:

In the realm of computer programming, the experience of writing a program is used to reinforce concepts and evaluate ability. This research uses three case studies to evaluate the introduction of testing through Kolb's Experiential Learning Model (ELM). We then analyze the impact of those testing experiences to determine methods for improving future courses. The first testing experience that students encounter is unit test reports in their early courses. This course demonstrates that automating and improving feedback can provide more ELM iterations. The JUnit Generation (JUG) tool also provided a positive experience for the instructor by reducing the overall workload. Later, undergraduate and graduate students have the opportunity to work together in a multi-role Human-Computer Interaction (HCI) course. The interactions use usability analysis techniques, with graduate students as usability experts and undergraduate students as design engineers. Students gain experience testing the user experience of their product prototypes using methods varying from heuristic analysis to user testing. From this course, we learned the importance of the instructor's role in the ELM. As more roles were added to the HCI course, a desire arose to provide more complete, quality-assured software. This inspired the addition of unit testing experiences to the course. However, we learned that significant preparations must be made to apply the ELM when students are resistant. The research presented through these courses was driven by the recognition of a need for testing in a Computer Science curriculum. Our understanding of the ELM suggests the need for student experience when introducing testing concepts. We learned that experiential learning, when appropriately implemented, can provide benefits to the Computer Science classroom. When examined together, these course-based research projects provided insight into building strong testing practices into a curriculum.

Relevance: 10.00%

Publisher:

Abstract:

The maximum principle is an important property of solutions to PDEs. Correspondingly, there is great interest in designing high-order numerical schemes for solving PDEs that maintain this property. In this thesis, our particular interest is solving convection-dominated diffusion equations. We first review a nonconventional maximum-principle-preserving (MPP) high-order finite volume (FV) WENO scheme, and then propose a new parametrized MPP high-order finite difference (FD) WENO framework, which is generalized from the one for solving hyperbolic conservation laws. A formal analysis is presented to show that a third-order finite difference scheme with these parametrized MPP flux limiters maintains third-order accuracy without an extra CFL constraint when the low-order monotone flux is chosen appropriately. Numerical tests in both one- and two-dimensional cases are performed on the simulation of the incompressible Navier-Stokes equations in vorticity stream-function formulation and on several other problems to show the effectiveness of the proposed method.
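In the notation assumed here (a schematic, not necessarily the thesis's exact symbols), the parametrized limiter blends the high-order WENO flux with a low-order monotone flux,

\[ \hat{F}_{j+1/2} \;=\; \hat{f}^{\,\mathrm{low}}_{j+1/2} \;+\; \theta_{j+1/2}\,\bigl(\hat{F}^{\,\mathrm{WENO}}_{j+1/2} - \hat{f}^{\,\mathrm{low}}_{j+1/2}\bigr), \qquad \theta_{j+1/2}\in[0,1], \]

with each \(\theta_{j+1/2}\) chosen as large as possible while the update \(u_j^{n+1} = u_j^{n} - \lambda\,(\hat{F}_{j+1/2} - \hat{F}_{j-1/2})\), \(\lambda = \Delta t/\Delta x\), stays within the global bounds \(m \le u_j^{n+1} \le M\). Setting \(\theta = 0\) falls back to the monotone low-order scheme, and \(\theta = 1\) recovers the unlimited WENO flux; the formal analysis referred to above shows that this limiting preserves third-order accuracy without an extra CFL restriction when the low-order monotone flux is chosen appropriately.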

Relevance: 10.00%

Publisher:

Abstract:

Non-monotone incentive structures, which, according to theory, are able to induce optimal behavior, are often regarded as empirically less relevant for labor relationships. We compare the performance of a theoretically optimal non-monotone contract with a monotone one under controlled laboratory conditions. Implementing some features relevant to real-world employment relationships, our paper demonstrates that, in fact, the frequency of income-maximizing decisions made by agents is higher under the monotone contract. Although this observed behavior does not change the superiority of the non-monotone contract for principals, principals nevertheless do not choose this contract type to a significant degree. This is what we call the monotonicity puzzle. Detailed investigations of decisions provide a clue for solving the puzzle and a possible explanation for the popularity of monotone contracts.

Relevance: 10.00%

Publisher:

Abstract:

The considerable search for synergistic agents in cancer research is motivated by the therapeutic benefits achieved by combining anti-cancer agents. Synergistic agents make it possible to reduce dosage while maintaining or enhancing a desired effect. Other favorable outcomes of synergistic agents include reduction in toxicity and minimizing or delaying drug resistance. Dose-response assessment and drug-drug interaction analysis play an important part in the drug discovery process; however, such analyses are often poorly done. This dissertation is an effort to notably improve dose-response assessment and drug-drug interaction analysis. The most commonly used method in published analyses is the Median-Effect Principle/Combination Index method (Chou and Talalay, 1984). The Median-Effect Principle/Combination Index method leads to inefficiency by ignoring important sources of variation inherent in dose-response data and by discarding data points that do not fit the Median-Effect Principle. Previous work has shown that the conventional method yields a high rate of false positives (Boik, Boik, Newman, 2008; Hennessey, Rosner, Bast, Chen, 2010) and, in some cases, low power to detect synergy. There is a great need to improve the current methodology. We developed a Bayesian framework for dose-response modeling and drug-drug interaction analysis. First, we developed a hierarchical meta-regression dose-response model that accounts for various sources of variation and uncertainty and allows one to incorporate knowledge from prior studies into the current analysis, thus offering more efficient and reliable inference. Second, for the case in which parametric dose-response models do not fit the data, we developed a practical and flexible nonparametric regression method for meta-analysis of independently repeated dose-response experiments. Third, and lastly, we developed a method based on Loewe additivity that allows one to quantitatively assess the interaction between two agents combined at a fixed dose ratio. The proposed method gives a comprehensive and honest account of uncertainty within drug interaction assessment. Extensive simulation studies show that the novel methodology improves the screening process for effective/synergistic agents and reduces the incidence of type I error. We consider an ovarian cancer cell line study that investigates the combined effect of DNA methylation inhibitors and histone deacetylation inhibitors in human ovarian cancer cell lines. The hypothesis is that the combination of DNA methylation inhibitors and histone deacetylation inhibitors will enhance antiproliferative activity in human ovarian cancer cell lines compared to treatment with each inhibitor alone. By applying the proposed Bayesian methodology, in vitro synergy was declared for the DNA methylation inhibitor 5-AZA-2'-deoxycytidine combined with one histone deacetylation inhibitor, suberoylanilide hydroxamic acid or trichostatin A, in the HEY and SKOV3 cell lines. This suggests potential new epigenetic therapies for cell growth inhibition of ovarian cancer cells.
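As a point of reference, the sketch below computes the Loewe additivity interaction index that the proposed assessment is built on, using median-effect (Hill-type) single-agent curves; all parameters are hypothetical, and the dissertation's Bayesian machinery is not reproduced.

```python
def dose_for_effect(y, ec50, hill):
    """Dose of a single agent producing fractional effect y (median-effect/Hill model)."""
    return ec50 * (y / (1.0 - y)) ** (1.0 / hill)

def loewe_interaction_index(d1, d2, y, ec50_1, hill_1, ec50_2, hill_2):
    """Loewe additivity interaction index for a combination (d1, d2) with observed effect y.

    Values near 1 indicate additivity, < 1 synergy, > 1 antagonism.
    """
    D1 = dose_for_effect(y, ec50_1, hill_1)   # dose of agent 1 alone giving effect y
    D2 = dose_for_effect(y, ec50_2, hill_2)   # dose of agent 2 alone giving effect y
    return d1 / D1 + d2 / D2

# Hypothetical single-agent parameters and a hypothetical combination observation:
ec50_1, hill_1 = 2.0, 1.2     # agent 1: EC50 = 2 uM, Hill slope 1.2
ec50_2, hill_2 = 5.0, 0.9     # agent 2: EC50 = 5 uM, Hill slope 0.9
d1, d2, observed_effect = 0.8, 1.5, 0.5   # doses (uM) and observed fractional effect

tau = loewe_interaction_index(d1, d2, observed_effect, ec50_1, hill_1, ec50_2, hill_2)
print(f"interaction index = {tau:.2f}")   # < 1 would suggest synergy at this dose pair
```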

Relevance: 10.00%

Publisher:

Abstract:

Hypertutorials optimize five features: presentation, learner control, practice, feedback, and elaborative learning resources. Previous research showed that graduate students significantly and overwhelmingly preferred Web-based hypertutorials to conventional "Book-on-the-Web" statistics or research design lessons. The current report shows that the source of hypertutorials' superiority in student evaluations of instruction lies in their hypertutorial features. Randomized comparisons between the two methodologies were conducted in two successive iterations of a graduate-level health informatics research design and evaluation course. The two versions contained the same text and graphics but differed in the presence or absence of hypertutorial features: elaborative learning resources, practice, feedback, and amount of learner control. Students gave high evaluations to both Web-based methodologies, but consistently rated the hypertutorial lessons as superior. The significant differences were localized in the hypertutorial subscale, which measured student responses to hypertutorial features.

Relevance: 10.00%

Publisher:

Abstract:

An integrated approach for multi-spectral segmentation of MR images is presented. This method is based on fuzzy c-means (FCM) clustering and includes bias field correction and contextual constraints over the spatial intensity distribution, and it accounts for non-spherical cluster shapes in the feature space. The bias field is modeled as a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of intensity are added to the FCM cost functions. To reduce the computational complexity, the contextual regularizations are separated from the clustering iterations. Since the feature space is not isotropic, the distance measure adopted in the Gustafson-Kessel (G-K) algorithm is used instead of the Euclidean distance to account for the non-spherical shape of the clusters in the feature space. These algorithms are quantitatively evaluated on MR brain images using similarity measures.
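For context, a sketch of the baseline fuzzy c-means iteration (alternating membership and centroid updates); the bias-field, neighborhood-regularization, and Gustafson-Kessel distance extensions described above are deliberately omitted.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Plain fuzzy c-means on feature vectors X (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n_clusters, n))
    U /= U.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Squared Euclidean distances from every sample to every centre:
        d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2) + 1e-12
        U_new = 1.0 / (d2 ** (1.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=0)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy 1-D "intensity" data with two clusters:
data_rng = np.random.default_rng(1)
X = np.concatenate([data_rng.normal(1.0, 0.1, 200),
                    data_rng.normal(3.0, 0.1, 200)]).reshape(-1, 1)
centers, U = fuzzy_c_means(X, n_clusters=2)
print(np.sort(centers.ravel()))             # approx. [1.0, 3.0]
```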

Relevance: 10.00%

Publisher:

Abstract:

Intensity non-uniformity (bias field) correction, contextual constraints over the spatial intensity distribution, and non-spherical cluster shapes in the feature space are incorporated into the fuzzy c-means (FCM) algorithm for segmentation of three-dimensional multi-spectral MR images. The bias field is modeled by a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of either intensity or membership are added to the FCM cost functions. Since the feature space is not isotropic, distance measures other than the Euclidean distance are used to account for the shape and volumetric effects of clusters in the feature space. The performance of segmentation is improved by combining the adaptive FCM scheme with the criteria used in the Gustafson-Kessel (G-K) and Gath-Geva (G-G) algorithms through the inclusion of a cluster scatter measure. The performance of this integrated approach is quantitatively evaluated on normal MR brain images using similarity measures. The improvement in the quality of segmentation obtained with our method is also demonstrated by comparing our results with those produced by FSL (FMRIB Software Library), a software package that is commonly used for tissue classification.
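A small sketch of the bias-field representation described above: the smooth field is expressed as a linear combination of low-order polynomial basis functions of the spatial coordinates, so that estimating it reduces to a small linear least-squares fit. The image size, basis order, and synthetic field below are hypothetical.

```python
import numpy as np

def polynomial_basis(xs, ys, order=2):
    """Stack 2-D monomials up to the given total order, one column per basis function."""
    cols = [xs**i * ys**j for i in range(order + 1)
                          for j in range(order + 1 - i)]
    return np.stack(cols, axis=1)

# Hypothetical 64x64 "log-intensity residual" that the bias field should explain:
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
xs = (xx.ravel() - w / 2) / w          # normalised coordinates
ys = (yy.ravel() - h / 2) / h
true_field = 0.3 + 0.5 * xs - 0.2 * ys + 0.4 * xs * ys
observed = true_field + np.random.normal(0, 0.02, true_field.shape)

B = polynomial_basis(xs, ys, order=2)
coeffs, *_ = np.linalg.lstsq(B, observed, rcond=None)   # least-squares bias estimate
bias_estimate = (B @ coeffs).reshape(h, w)
print("fitted coefficients:", np.round(coeffs, 2))
```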

Relevance: 10.00%

Publisher:

Abstract:

We propose notions of calibration for probabilistic forecasts of general multivariate quantities. Probabilistic copula calibration is a natural analogue of probabilistic calibration in the univariate setting. It can be assessed empirically by checking for the uniformity of the copula probability integral transform (CopPIT), which is invariant under coordinate permutations and coordinatewise strictly monotone transformations of the predictive distribution and the outcome. The CopPIT histogram can be interpreted as a generalization and variant of the multivariate rank histogram, which has been used to check the calibration of ensemble forecasts. Climatological copula calibration is an analogue of marginal calibration in the univariate setting. Methods and tools are illustrated in a simulation study and applied to compare raw numerical model and statistically postprocessed ensemble forecasts of bivariate wind vectors.
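For reference, a sketch of the univariate probabilistic-calibration check that CopPIT generalizes: compute the probability integral transform of each outcome under its predictive distribution and check the values for uniformity. The multivariate CopPIT additionally involves the copula of the predictive distribution and is not reproduced here; the forecasts below are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical setup: Gaussian predictive distributions and matching outcomes.
n = 1000
mu = rng.normal(0.0, 2.0, n)            # predictive means
sigma = np.full(n, 1.0)                 # predictive standard deviations
outcomes = rng.normal(mu, sigma)        # outcomes drawn from the same model => calibrated

pit = stats.norm.cdf(outcomes, loc=mu, scale=sigma)   # probability integral transform

# A calibrated forecast yields (approximately) uniform PIT values.
ks = stats.kstest(pit, "uniform")
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
hist, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
print("PIT histogram counts:", hist)
```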

Relevance: 10.00%

Publisher:

Abstract:

PURPOSE To develop internationally harmonised standards for programmes of training in intensive care medicine (ICM). METHODS Standards were developed by using consensus techniques. A nine-member nominal group of European intensive care experts developed a preliminary set of standards. These were revised and refined through a modified Delphi process involving 28 European national coordinators representing national training organisations using a combination of moderated discussion meetings, email, and a Web-based tool for determining the level of agreement with each proposed standard, and whether the standard could be achieved in the respondent's country. RESULTS The nominal group developed an initial set of 52 possible standards which underwent four iterations to achieve maximal consensus. All national coordinators approved a final set of 29 standards in four domains: training centres, training programmes, selection of trainees, and trainers' profiles. Only three standards were considered immediately achievable by all countries, demonstrating a willingness to aspire to quality rather than merely setting a minimum level. Nine proposed standards which did not achieve full consensus were identified as potential candidates for future review. CONCLUSIONS This preliminary set of clearly defined and agreed standards provides a transparent framework for assuring the quality of training programmes, and a foundation for international harmonisation and quality improvement of training in ICM.

Relevance: 10.00%

Publisher:

Abstract:

In industrialized societies, 10–15% of the population complains of daytime sleepiness. In addition to shift work and the widespread, socially driven sleep insufficiency, the growing number of sleep-wake disorders is likely to contribute to this. The consequences of sleepiness at the wheel are inattention, "tunnel vision", and prolonged reaction times. Accidents involving microsleep often occur at undiminished speed, which leads to particularly severe and, particularly often, fatal crashes. In Switzerland, official statistics attribute only about 1.5% of traffic accidents to falling asleep at the wheel, which appears to be a massive underestimate compared with the 10–30% share reported in the specialist literature. The discrepancy in the official statistics probably arises, among other things, because sleepiness is difficult to measure. This underestimation of the true problem matters because countermeasures in road construction and the assessment of at-fault drivers are still pursued too inconsistently. Risk factors for sleepiness-related traffic accidents include young age, little driving experience, male sex, risk-taking behavior, night driving, monotonous routes, long driving times, socially or occupationally caused sleep deficits, as well as sleep-wake disorders and sedating medications. The risk factors and the typical characteristics of sleepiness-related accidents are relatively well known, so that preventive countermeasures and targeted assessments of at-fault drivers would be possible. Because every affected person can recognize the signs of sleepiness in time, that is, before a microsleep occurs at the wheel, educating all road users, and thus also patients, about their individual risk and about effective countermeasures such as stopping, drinking coffee, and taking a brief "power nap" is particularly important. This educational discussion should be documented in the medical record at the first consultation, which is especially important when sedating medications are prescribed. For all professional drivers with daytime sleepiness, and for all drivers who have already had an accident, we recommend referral to a sleep medicine center so that the daytime sleepiness can be objectively assessed and the patient's compliance thereby improved. In the case of patients who lack insight, physicians in Switzerland have the right, but not the obligation, to report them to the authorities.