907 results for Unconstrained minimization


Relevance: 10.00%

Abstract:

Aging societies suffer from an increasing incidence of bone fractures. Bone strength depends on the amount of mineral measured by clinical densitometry, but also on the micromechanical properties of the bone hierarchical organization. A good understanding has been reached for elastic properties on several length scales, but up to now there is a lack of reliable postyield data on the lower length scales. In order to describe the behavior of bone at the microscale, an anisotropic elastic-viscoplastic damage model was developed using an eccentric generalized Hill criterion and nonlinear isotropic hardening. The model was implemented as a user subroutine in Abaqus and verified using single-element tests. Finally, an FE simulation of microindentation in lamellar bone was performed, showing that the new constitutive model can capture the main characteristics of the indentation response of bone. As the generalized Hill criterion is limited to elliptical and cylindrical yield surfaces and the correct shape for bone is not known, a new yield surface was developed that can take any convex quadratic shape (a schematic form is sketched after this abstract). The main advantage is that during material identification the shape of the yield surface does not have to be anticipated; instead, a minimization yields the optimal shape among all convex quadrics. The generality of the formulation was demonstrated by showing its degeneration to classical yield surfaces. Also, existing yield criteria for bone at multiple length scales were converted to the quadric formulation. Then, a computational study was performed to determine the influence of yield surface shape and damage on the indentation response of bone using spherical and conical tips. The constitutive model was adapted to the quadric criterion, and yield surface shape and critical damage were varied. They were shown to have a major impact on the indentation curves. Their influence on indentation modulus, hardness, their ratio, and the elastic-to-total work ratio was found to be very well described by multilinear regressions for both tip shapes. For conical tips, indentation depth was not a significant factor, while for spherical tips damage was insignificant. All inverse methods based on microindentation suffer from a lack of uniqueness of the identified material properties in the case of nonlinear material behavior. Therefore, monotonic and cyclic micropillar compression tests in a scanning electron microscope, which allow a straightforward interpretation, were complemented by microindentation and macroscopic uniaxial compression tests on dry ovine bone to identify modulus, yield stress, plastic deformation, damage accumulation and failure mechanisms. While the elastic properties were highly consistent, the postyield deformation and failure mechanisms differed between the two length scales. A majority of the micropillars showed ductile behavior with strain hardening until failure by localization in a slip plane, while the macroscopic samples failed in a quasi-brittle fashion with microcracks coalescing into macroscopic failure surfaces. In agreement with a proposed rheological model, these experiments illustrate a transition from a ductile mechanical behavior of bone at the microscale to a quasi-brittle response driven by the growth of preexisting cracks along interfaces or in the vicinity of pores at the macroscale. Subsequently, a study was undertaken to quantify the topological variability of indentations in bone and examine its relationship with mechanical properties.
Indentations were performed in dry human and ovine bone in axial and transverse directions and their topography measured by AFM. Statistical shape modeling of the residual imprint allowed us to define a mean shape and to describe the variability with 21 principal components related to imprint depth, surface curvature and roughness. The indentation profile of bone was highly consistent and free of any pile-up. A few of the topological parameters, in particular depth, showed significant correlations with variations in mechanical properties, but the correlations were not very strong or consistent. We could thus verify that bone is rather homogeneous in its micromechanical properties and that indentation results are not strongly influenced by small deviations from the ideal case. As the uniaxial properties measured by micropillar compression are in conflict with the current literature on bone indentation, another dissipative mechanism has to be present. The elastic-viscoplastic damage model was therefore extended to viscoelasticity. The viscoelastic properties were identified from macroscopic experiments, while the quasistatic postelastic properties were extracted from micropillar data. It was found that viscoelasticity governed by macroscale properties has very little influence on the indentation curve and results in a clear underestimation of the creep deformation. Adding viscoplasticity leads to increased creep, but hardness is still highly overestimated. It was possible to obtain a reasonable fit with experimental indentation curves for both Berkovich and spherical indentation when abandoning the assumption that shear strength is governed by an isotropy condition. These results remain to be verified by independent tests probing the micromechanical strength properties in tension and shear. In conclusion, in this thesis several tools were developed to describe the complex behavior of bone on the microscale, and experiments were performed to identify its material properties. Micropillar compression highlighted a size effect in bone due to the presence of preexisting cracks and pores or interfaces like cement lines. It was possible to obtain a reasonable fit between experimental indentation curves using different tips and simulations using the constitutive model and uniaxial properties measured by micropillar compression. Additional experimental work is necessary to identify the exact nature of the size effect and the mechanical role of interfaces in bone. Deciphering the micromechanical behavior of lamellar bone, its evolution with age, disease and treatment, and its failure mechanisms on several length scales will help prevent fractures in the elderly in the future.
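As a side note on the convex quadric criterion mentioned in the abstract above, one common way to write such a yield surface in stress space is the following; the tensors \(\mathbb{F}\) and \(\mathbf{F}\) are illustrative notation, not necessarily that of the thesis:

    Y(\boldsymbol{\sigma}) \;=\; \sqrt{\,\boldsymbol{\sigma} : \mathbb{F} : \boldsymbol{\sigma}\,} \;+\; \mathbf{F} : \boldsymbol{\sigma} \;-\; 1 \;\le\; 0,
    \qquad \mathbb{F} \succeq 0,

where positive semi-definiteness of the fourth-order tensor \(\mathbb{F}\) guarantees convexity and the second-order tensor \(\mathbf{F}\) shifts the surface off-center (eccentricity); classical criteria and the generalized Hill surface then correspond to particular choices of these two tensors.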

Relevance: 10.00%

Abstract:

In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, participating media, and others, in an elegant and unified framework. However, MCPT is a sampling-based approach, and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists of increasing the sampling rate on a per-pixel basis, to ensure that each pixel value is below a predefined error threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, to strike an optimal trade-off between minimizing residual noise artifacts and preserving edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies, by ensuring that we sample densely only those regions of the image where adaptive reconstruction cannot properly resolve the noise. In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture), to improve the robustness of the reconstruction in the presence of strong noise.
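For illustration only, the sample-then-reconstruct loop described above can be sketched as follows; the callables add_samples, reconstruct and estimate_rmse, as well as all parameter names, are hypothetical placeholders for a real MCPT backend, not the thesis framework's actual API.

    import numpy as np

    def adaptive_render(add_samples, reconstruct, estimate_rmse,
                        height, width, init_spp, batch, budget):
        """Greedy rMSE minimization by alternating sampling and reconstruction.
        Placeholders for a real renderer:
          add_samples(counts)       -> accumulates counts[y, x] new path samples
                                       per pixel and returns running statistics,
          reconstruct(stats)        -> filtered image (e.g. per-pixel filter choice),
          estimate_rmse(img, stats) -> per-pixel relative MSE estimate.
        Schematic sketch of the loop, not the authors' implementation."""
        counts = np.full((height, width), init_spp, dtype=int)   # uniform start
        stats = add_samples(counts)
        image = reconstruct(stats)
        spent = counts.sum()
        while spent < budget:
            rmse = estimate_rmse(image, stats)                   # residual error
            weights = rmse / max(rmse.sum(), 1e-12)              # where to sample
            counts = (weights * batch).astype(int)
            if counts.sum() == 0:                                # nothing left to refine
                break
            stats = add_samples(counts)                          # densify only where
            image = reconstruct(stats)                           #   noise remains
            spent += counts.sum()
        return image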

Relevance: 10.00%

Abstract:

In this work we solve the uncalibrated photometric stereo problem with lights placed near the scene. We investigate different image formation models and find the one that best fits our observations. Although the devised model is more complex than its far-light counterpart, we show that under a global linear ambiguity the reconstruction is possible up to a rotation and scaling, which can be easily fixed. We also propose a solution for reconstructing the normal map, the albedo, the light positions and the light intensities of a scene given only a sequence of near-light images. This is done in an alternating minimization framework which first estimates both the normals and the albedo, and then the light positions and intensities. We validate our method on real world experiments and show that a near-light model leads to a significant improvement in the surface reconstruction compared to the classic distant illumination case.
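A minimal sketch of the alternating scheme described above is given below, assuming two sub-solvers (solve_shape and solve_lights, hypothetical names) that each fit the near-light image formation model to the observed images in the least-squares sense; it is a schematic outline under those assumptions, not the paper's implementation.

    import numpy as np

    def alternating_minimization(images, init_lights, solve_shape, solve_lights,
                                 n_iters=20, tol=1e-6):
        """Alternate between the two sub-problems of near-light photometric stereo:
        fix the lights and estimate normals + albedo, then fix the shape and
        re-estimate light positions and intensities. `solve_shape` and
        `solve_lights` are placeholders for the two least-squares fits."""
        lights = init_lights
        prev_cost = np.inf
        for _ in range(n_iters):
            normals, albedo, cost = solve_shape(images, lights)
            lights, cost = solve_lights(images, normals, albedo)
            if abs(prev_cost - cost) < tol * max(prev_cost, 1.0):
                break                                   # converged
            prev_cost = cost
        return normals, albedo, lights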

Relevance: 10.00%

Abstract:

In this paper we study the problem of blind deconvolution. Our analysis is based on the algorithm of Chan and Wong [2], which popularized the use of sparse gradient priors via total variation. We use this algorithm because many methods in the literature are essentially adaptations of this framework. The algorithm is an iterative alternating energy minimization where at each step either the sharp image or the blur function is reconstructed. Recent work of Levin et al. [14] showed that any algorithm that tries to minimize that same energy would fail, as the desired solution has a higher energy than the no-blur solution, where the sharp image is the blurry input and the blur is a Dirac delta. However, experimentally one can observe that Chan and Wong's algorithm converges to the desired solution even when initialized with the no-blur one. We provide both analysis and experiments to resolve this apparent paradox. We find that both claims are right. The key to understanding how this is possible lies in the details of Chan and Wong's implementation and in how seemingly harmless choices result in dramatic effects. Our analysis reveals that the delayed scaling (normalization) in the iterative step of the blur kernel is fundamental to the convergence of the algorithm. This results in a procedure that eludes the no-blur solution, despite it being a global minimum of the original energy. We introduce an adaptation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves performance comparable to the state of the art.
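The following toy sketch illustrates the kind of alternating scheme discussed above, including a delayed normalization of the blur kernel; the function names, step sizes and the smoothed-TV prior are illustrative choices (a 2-D float grayscale input is assumed), not Chan and Wong's or the paper's exact procedure.

    import numpy as np
    from scipy.signal import fftconvolve

    def tv_grad(u, eps=1e-3):
        # Gradient of a smoothed total variation sum(sqrt(ux^2 + uy^2 + eps^2)).
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        return -div

    def blind_deconv(f, ksize=15, lam=2e-3, iters=200, tu=0.1, tk=1e-2):
        # Alternating gradient steps on the sharp image u and the kernel k for
        # the energy ||k*u - f||^2 + lam*TV(u).  The kernel is clipped and
        # rescaled only AFTER its update, i.e. the scaling is delayed.
        u = f.astype(float)
        k = np.zeros((ksize, ksize))
        k[ksize // 2, ksize // 2] = 1.0                 # no-blur (Dirac) init
        for _ in range(iters):
            r = fftconvolve(u, k, mode="same") - f      # residual k*u - f
            u = u - tu * (fftconvolve(r, k[::-1, ::-1], mode="same")
                          + lam * tv_grad(u))
            r = fftconvolve(u, k, mode="same") - f
            corr = fftconvolve(r, u[::-1, ::-1], mode="full")
            c0, c1 = corr.shape[0] // 2, corr.shape[1] // 2
            gk = corr[c0 - ksize // 2: c0 + ksize // 2 + 1,
                      c1 - ksize // 2: c1 + ksize // 2 + 1]
            k = k - tk * gk / max(np.abs(gk).max(), 1e-12)   # normalized step
            k = np.clip(k, 0.0, None)
            k /= max(k.sum(), 1e-12)                    # delayed kernel scaling
        return u, k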

Relevance: 10.00%

Abstract:

In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem where there are only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution by a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, this prior is non-convex. Therefore, solutions that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that at each iteration solve convex problems: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
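One plausible instantiation of the two-term energy described above is the following; the scalars \(\lambda\) and \(\varepsilon\) and the discrete gradient operator are placeholders, and the paper's exact parameterization may differ:

    E(u, k) \;=\; \tfrac{1}{2}\,\lVert k \ast u - f \rVert_2^2
    \;+\; \lambda \sum_{i} \log\!\bigl(\varepsilon + \lvert (\nabla u)_i \rvert\bigr),
    \qquad k \ge 0,\ \ \textstyle\sum_a k_a = 1,

where f is the blurry input, the lower bound \(\varepsilon > 0\) keeps the logarithm finite at zero gradient, and the simplex constraint on k is the usual normalization of the blur kernel.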

Relevance: 10.00%

Abstract:

Methods for tracking an object have generally fallen into two groups: tracking by detection and tracking through local optimization. The advantage of detection-based tracking is its ability to deal with target appearance and disappearance, but it does not naturally take advantage of target motion continuity during detection. The advantage of local optimization is efficiency and accuracy, but it requires additional algorithms to initialize tracking when the target is lost. To bridge these two approaches, we propose a framework for unified detection and tracking as a time-series Bayesian estimation problem. The basis of our approach is to treat both detection and tracking as a sequential entropy minimization problem, where the goal is to determine the parameters describing a target in each frame. To do this we integrate the Active Testing (AT) paradigm with Bayesian filtering, and this results in a framework capable of both detecting and tracking robustly in situations where the target object enters and leaves the field of view regularly. We demonstrate our approach on a retinal tool tracking problem and show through extensive experiments that our method provides an efficient and robust tracking solution.
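As an illustration of sequential entropy minimization combined with Bayesian filtering, the sketch below picks, at each frame, the binary test whose expected posterior entropy is lowest, updates a discrete belief with Bayes' rule, and propagates it through a motion model; the discrete state space, binary tests and all names are simplifying assumptions, not the Active Testing formulation used in the paper.

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def select_test(belief, likelihoods):
        """Pick the test with the lowest expected posterior entropy.
        likelihoods[t, s] = P(test t is positive | target state s)."""
        best_t, best_h = None, np.inf
        for t, l in enumerate(likelihoods):
            p_pos = np.sum(l * belief)
            post_pos = l * belief / max(p_pos, 1e-12)
            post_neg = (1 - l) * belief / max(1 - p_pos, 1e-12)
            h = p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg)
            if h < best_h:
                best_t, best_h = t, h
        return best_t

    def detect_and_track(belief, likelihoods, transition, run_test, n_steps):
        """Schematic unified detection/tracking loop: choose the most informative
        test, update the belief with its outcome (Bayes rule), then diffuse the
        belief through a state-transition matrix (Bayesian prediction step)."""
        for _ in range(n_steps):
            t = select_test(belief, likelihoods)
            outcome = run_test(t)                       # measurement from the frame
            l = likelihoods[t] if outcome else 1 - likelihoods[t]
            belief = l * belief
            belief /= max(belief.sum(), 1e-12)
            belief = transition @ belief                # motion model
        return belief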

Relevance: 10.00%

Abstract:

Well-established methods exist for measuring party positions, but reliable means for estimating intra-party preferences remain underdeveloped. While most efforts focus on estimating the ideal points of individual legislators based on inductive scaling of roll call votes, these data suffer from two problems: selection bias due to unrecorded votes and strong party discipline, which tends to make voting a strategic rather than a sincere indication of preferences. By contrast, legislative speeches are relatively unconstrained, as party leaders are less likely to punish MPs for speaking freely as long as they vote with the party line. Yet the differences between roll call estimates and text scalings remain essentially unexplored, despite the growing application of statistical analysis of textual data to measure policy preferences. Our paper addresses this lacuna by exploiting a rich feature of the Swiss legislature: on most bills, legislators both vote and speak many times. Using these data, we compare text-based scaling of ideal points to vote-based scaling from a crucial piece of energy legislation. Our findings confirm that text scalings reveal larger intra-party differences than roll calls. Using regression models, we further explain the differences between roll call and text scalings by attributing differences to constituency-level preferences for energy policy.

Relevance: 10.00%

Abstract:

INTRODUCTION Dexmedetomidine was shown in two European randomized double-blind double-dummy trials (PRODEX and MIDEX) to be non-inferior to propofol and midazolam in maintaining target sedation levels in mechanically ventilated intensive care unit (ICU) patients. Additionally, dexmedetomidine shortened the time to extubation versus both standard sedatives, suggesting that it may reduce ICU resource needs and thus lower ICU costs. Considering resource utilization data from these two trials, we performed a secondary, cost-minimization analysis assessing the economics of dexmedetomidine versus standard care sedation. METHODS The total ICU costs associated with each study sedative were calculated on the basis of total study sedative consumption and the number of days patients remained intubated, required non-invasive ventilation, or required ICU care without mechanical ventilation. The daily unit costs for these three consecutive ICU periods were set to decline toward discharge, reflecting the observed reduction in mean daily Therapeutic Intervention Scoring System (TISS) points between the periods. A number of additional sensitivity analyses were performed, including one in which the total ICU costs were based on the cumulative sum of daily TISS points over the ICU period, and two further scenarios, with declining direct variable daily costs only. RESULTS Based on pooled data from both trials, sedation with dexmedetomidine resulted in lower total ICU costs than using the standard sedatives, with a difference of €2,656 in the median (interquartile range) total ICU costs, €11,864 (€7,070 to €23,457) versus €14,520 (€7,871 to €26,254), and €1,649 in the mean total ICU costs. The median (mean) total ICU costs with dexmedetomidine compared with those of propofol or midazolam were €1,292 (€747) and €3,573 (€2,536) lower, respectively. The result was robust, indicating lower costs with dexmedetomidine in all sensitivity analyses, including those in which only direct variable ICU costs were considered. The likelihood of dexmedetomidine resulting in lower total ICU costs compared with pooled standard care was 91.0% (72.4% versus propofol and 98.0% versus midazolam). CONCLUSIONS From an economic point of view, dexmedetomidine appears to be a preferable option compared with standard sedatives for providing light to moderate ICU sedation exceeding 24 hours. The savings potential results primarily from shorter time to extubation. TRIAL REGISTRATION ClinicalTrials.gov NCT00479661 (PRODEX), NCT00481312 (MIDEX).

Relevance: 10.00%

Abstract:

The focal point of this paper is to propose and analyze a P0 discontinuous Galerkin (DG) formulation for image denoising. The scheme is based on a total variation approach which has been applied successfully in previous papers on image processing. The main idea of the new scheme is to model the restoration process in terms of a discrete energy minimization problem and to derive a corresponding DG variational formulation. Furthermore, we prove that the method admits a unique solution and that a natural maximum principle holds. In addition, a number of examples illustrate the effectiveness of the method.
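For orientation, a typical total-variation (ROF-type) discrete energy that a piecewise-constant (P0) DG scheme of this kind can be built around reads as follows; the notation is illustrative, not necessarily the paper's:

    E(u_h) \;=\; \sum_{e \in \mathcal{E}_h} |e|\,\bigl|[\![u_h]\!]_e\bigr|
    \;+\; \frac{\lambda}{2} \sum_{K \in \mathcal{T}_h} \int_K (u_h - f)^2 \, dx,

where u_h is constant on each mesh element K of the triangulation \(\mathcal{T}_h\), \([\![u_h]\!]_e\) denotes the jump across an interior edge e of length |e|, f is the noisy image, and \(\lambda\) balances data fidelity against the total-variation regularization.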

Relevance: 10.00%

Abstract:

OBJECTIVES The aim of this prospective cohort trial was to perform a cost/time analysis for implant-supported single-unit reconstructions in the digital workflow compared to the conventional pathway. MATERIALS AND METHODS A total of 20 patients were included for rehabilitation with 2 × 20 implant crowns in a crossover study design and treated consecutively each with customized titanium abutments plus CAD/CAM zirconia suprastructures (test: digital) and with standardized titanium abutments plus PFM crowns (control: conventional). Starting with the prosthetic treatment, clinical and laboratory work steps were analyzed, including costs in Swiss francs (CHF), productivity rates and cost minimization for first-line therapy. Statistical calculations were performed with the Wilcoxon signed-rank test. RESULTS Both protocols worked successfully for all test and control reconstructions. Direct treatment costs were significantly lower for the digital workflow (1,815.35 CHF) than for the conventional pathway (2,119.65 CHF) [P = 0.0004]. In the subprocess evaluation, total laboratory costs were calculated as 941.95 CHF for the test group and 1,245.65 CHF for the control group, respectively [P = 0.003]. The clinical dental productivity rate amounted to 29.64 CHF/min (digital) and 24.37 CHF/min (conventional) [P = 0.002]. Overall, the cost-minimization analysis showed an 18% cost reduction within the digital process. CONCLUSION The digital workflow was more efficient than the established conventional pathway for implant-supported crowns in this investigation.

Relevance: 10.00%

Abstract:

Electricity markets in the United States presently employ an auction mechanism to determine the dispatch of power generation units. In this market design, generators submit bid prices to a regulation agency for review, and the regulator conducts an auction selection in such a way that satisfies electricity demand. Most regulators currently use an auction selection method that minimizes total offer costs ["bid cost minimization" (BCM)] to determine electric dispatch. However, recent literature has shown that this method may not minimize consumer payments, and it has been shown that an alternative selection method that directly minimizes total consumer payments ["payment cost minimization" (PCM)] may benefit social welfare in the long term. The objective of this project is to further investigate the long term benefit of PCM implementation and determine whether it can provide lower costs to consumers. The two auction selection methods are expressed as linear constraint programs and are implemented in an optimization software package. Methodology for game theoretic bidding simulation is developed using EMCAS, a real-time market simulator. Results of a 30-day simulation showed that PCM reduced energy costs for consumers by 12%. However, this result will be cross-checked in the future with two other methods of bid simulation as proposed in this paper.
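As a concrete toy version of bid cost minimization, the single-period linear program below chooses dispatch quantities that satisfy demand at minimum total offered cost; the data, the function name and the omission of unit-commitment and network constraints are simplifying assumptions, and the payment-cost-minimizing counterpart is only noted in a comment because minimizing the uniform clearing price times demand is no longer a plain LP.

    import numpy as np
    from scipy.optimize import linprog

    def bcm_dispatch(bids, capacities, demand):
        """Bid-cost-minimizing dispatch: choose quantities q_i in [0, cap_i]
        meeting demand at minimum total offered cost sum(bid_i * q_i).
        A PCM auction would instead minimize the uniform clearing price
        times total demand, which requires a mixed-integer formulation."""
        n = len(bids)
        res = linprog(c=np.asarray(bids, dtype=float),
                      A_eq=np.ones((1, n)), b_eq=[demand],
                      bounds=[(0.0, cap) for cap in capacities],
                      method="highs")
        return res.x, res.fun      # dispatched quantities, total offer cost

    # Example with three hypothetical generators bidding 20, 35 and 50 $/MWh:
    q, cost = bcm_dispatch([20, 35, 50], [100, 80, 120], demand=150)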

Relevance: 10.00%

Abstract:

We propose a nonparametric model for global cost minimization as a framework for optimal allocation of a firm's output target across multiple locations, taking account of differences in input prices and technologies across locations. This should be useful for firms planning production sites within a country and for foreign direct investment decisions by multi-national firms. Two illustrative examples are included. The first example considers the production location decision of a manufacturing firm across a number of adjacent states of the US. In the other example, we consider the optimal allocation of US and Canadian automobile manufacturers across the two countries.
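In equation form, the allocation problem described above can be sketched as follows; the location-specific cost functions \(C_l\) and the notation are illustrative, not necessarily the paper's exact nonparametric construction:

    C(y) \;=\; \min_{y_1,\dots,y_L \ge 0} \; \sum_{l=1}^{L} C_l(y_l;\, w_l)
    \quad \text{s.t.} \quad \sum_{l=1}^{L} y_l = y,

where y is the firm's total output target, \(w_l\) are the input prices at location l, and each \(C_l\) is a minimum-cost function built nonparametrically (DEA-style) from observed input-output data for that location's technology.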

Relevance: 10.00%

Abstract:

Background. Various aspects of sustainability have taken root in the hospital environment; however, decisions to pursue sustainable practices within the framework of a master plan are not fully developed in National Cancer Institute (NCI)-designated cancer centers and subscribing institutions to the Practice Greenhealth (PGH) listserv. Methods. This cross-sectional study was designed to identify the organizational characteristics each study group pursued to implement sustainability practices, describe the barriers they encountered, and explain the reasons behind their choices for undertaking certain sustainability practices. A web-based questionnaire was pilot tested and then sent out to 64 NCI-designated cancer centers and 1,638 subscribing institutions to the PGH listserv. Results. Complete responses were received from 39 NCI-designated cancer centers and 58 subscribing institutions to the PGH listserv. NCI-designated cancer centers reported greater progress in integrating sustainability criteria into design and construction projects than hospitals of institutions subscribing to the PGH listserv (p-value < 0.05). Statistically significant differences were also identified between these two study groups in undertaking work-life options, conducting energy usage assessments, developing energy conservation and optimization plans, implementing solid waste and hazardous waste minimization programs, using energy-efficient vehicles, and reporting sustainability progress to external stakeholders. NCI-designated cancer centers were further along in implementing these programs (p-value < 0.05). In comparing the self-identified NCI-designated cancer centers to centers that indicated they were both NCI-designated and PGH subscribers, the latter had made greater progress in using their collective buying power to pursue sustainable purchasing practices within the medical community (p-value < 0.05). In both study groups, recycling programs were well developed. Conclusions. Employee involvement was viewed as the most important reason for both study groups to pursue recycling initiatives and to incorporate environmental criteria into purchasing decisions. A written sustainability commitment did not readily translate into a high percentage that had developed a sustainability master plan. Coordination of sustainability programs through a designated sustainability professional was not being undertaken by a large number of institutions within each study group. This may be due to the current economic downturn or management's attention to the emerging health care legislation being debated in Congress. Lifecycle assessments, an element of a carbon footprint, are seen as emerging areas of opportunity for health care institutions that can be used to evaluate the total lifecycle costs of products and services.

Relevance: 10.00%

Abstract:

Clinical text understanding (CTU) is of interest to health informatics because critical clinical information, frequently represented as unconstrained text in electronic health records, is extensively used by human experts to guide clinical practice and decision making and to document delivery of care, but is largely unusable by information systems for queries and computations. Recent initiatives advocating for translational research call for the generation of technologies that can integrate structured clinical data with unstructured data, provide a unified interface to all data, and contextualize clinical information for reuse in the multidisciplinary and collaborative environment envisioned by the CTSA program. This implies that technologies for the processing and interpretation of clinical text should be evaluated not only in terms of their validity and reliability in their intended environment, but also in light of their interoperability and ability to support information integration and contextualization in a distributed and dynamic environment. This vision adds a new layer of information representation requirements that needs to be accounted for when conceptualizing the implementation or acquisition of clinical text processing tools and technologies for multidisciplinary research. On the other hand, electronic health records frequently contain unconstrained clinical text with high variability in the use of terms and documentation practices, and without commitment to the grammatical or syntactic structure of the language (e.g., triage notes, physician and nurse notes, chief complaints, etc.). This hinders the performance of natural language processing technologies, which typically rely heavily on the syntax of the language and the grammatical structure of the text. This document introduces our method to transform unconstrained clinical text found in electronic health information systems into a formal (computationally understandable) representation that is suitable for querying, integration, contextualization and reuse, and is resilient to the grammatical and syntactic irregularities of clinical text. We present our design rationale, method, and results of an evaluation in processing chief complaints and triage notes from 8 different emergency departments in Houston, Texas. Finally, we discuss the significance of our contribution in enabling the use of clinical text in a practical bio-surveillance setting.
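Purely as an illustration of turning free-text clinical input into a structured, queryable record, and explicitly not a description of the method above, a deliberately simplistic sketch with a made-up concept table could look like this:

    import re

    # Toy abbreviation/synonym table; a real system would map to a standard
    # terminology (e.g. SNOMED CT or UMLS concept identifiers).
    CONCEPTS = {
        "sob": "shortness of breath",
        "shortness of breath": "shortness of breath",
        "cp": "chest pain",
        "chest pain": "chest pain",
        "abd pain": "abdominal pain",
    }

    def normalize_chief_complaint(text):
        """Very rough normalization of a free-text chief complaint into coded
        concepts: lowercase, split on common delimiters, look each fragment up
        in the concept table.  Purely illustrative; resilient clinical text
        processing needs far more (spelling variants, negation, context)."""
        fragments = re.split(r"[;,]| and ", text.lower())
        found = [CONCEPTS[f.strip(" .")] for f in fragments
                 if f.strip(" .") in CONCEPTS]
        return {"raw": text, "concepts": found}

    print(normalize_chief_complaint("CP and SOB"))
    # {'raw': 'CP and SOB', 'concepts': ['chest pain', 'shortness of breath']}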

Relevance: 10.00%

Abstract:

Organisms that are distributed across spatial climate gradients often exhibit adaptive local variations in morphological and physiological traits, but to what extent such gradients shape evolutionary responses is still unclear. Given the strong natural contrast in latitudinal temperature gradients between the North-American Pacific and Atlantic coast, we asked how increases in vertebral number (VN, known as Jordan's Rule) with latitude would differ between Pacific (Atherinops affinis) and Atlantic Silversides (Menidia menidia), two ecologically equivalent and taxonomically similar fishes with similar latitudinal distributions. VN was determined from radiographs of wild-caught adults (genetic + environmental differences) and its genetic basis confirmed by rearing offspring in common garden experiments. Compared to published data on VN variation in M. menidia (a mean increase of 7.0 vertebrae from 32 to 46°N, VN slope = 0.42/lat), the latitudinal VN increase in Pacific Silversides was approximately half as strong (a mean increase of 3.3 vertebrae from 28 to 43°N, VN slope = 0.23/lat). This mimicked the strong Atlantic (1.11°C/lat) versus weak Pacific latitudinal gradient (0.40°C/lat) in median annual sea surface temperature (SST). Importantly, the relationship of VN to SST was not significantly different between the two species (average slope = -0.39 vertebrae/°C), thus suggesting a common thermal dependency of VN in silverside fishes. Our findings provide novel support for the hypothesis that temperature gradients are the ultimate cause of Jordan's Rule, even though its exact adaptive significance remains speculative. A second investigated trait, the mode of sex determination in Atlantic versus Pacific Silversides, revealed patterns that were inconsistent with our expectation: M. menidia displays temperature-dependent sex determination (TSD) at low latitudes, where growing seasons are long or unconstrained, but also a gradual shift to genetic sex determination (GSD) with increasing latitude due to more and more curtailed growing seasons. Sex ratios in A. affinis, on the other hand, were independent of latitude and rearing temperature (indicating GSD), even though growing seasons are thermally unconstrained across most of the geographical distribution of A. affinis. This suggests that additional factors (e.g., longevity) play an important role in shaping the mode of sex determination in silverside fishes.