980 results for Dynamic code generation


Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Physiologic data display is essential to decision making in critical care. Current displays echo first-generation hemodynamic monitors dating to the 1970s and have not kept pace with new insights into physiology or the needs of clinicians who must make progressively more complex decisions about their patients. The effectiveness of any redesign must be tested before deployment. Tools that compare current displays with novel presentations of processed physiologic data are required. Regenerating conventional physiologic displays from archived physiologic data is an essential first step. OBJECTIVES: The purposes of the study were to (1) describe the SSSI (single sensor single indicator) paradigm that is currently used for physiologic signal displays, (2) identify and discuss possible extensions and enhancements of the SSSI paradigm, and (3) develop a general approach and a software prototype to construct such "extended SSSI displays" from raw data. RESULTS: We present the Multi Wave Animator (MWA) framework, a set of open-source MATLAB (MathWorks, Inc., Natick, MA, USA) scripts aimed at creating dynamic visualizations (e.g., video files in AVI format) of patient vital signs recorded from bedside (intensive care unit or operating room) monitors. Multi Wave Animator creates animations in which vital signs are displayed to mimic their appearance on current bedside monitors. The source code of MWA is freely available online together with a detailed tutorial and sample data sets.
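
A minimal sketch of the MWA idea in Python (MWA itself is a set of MATLAB scripts): replay an archived vital-sign waveform so it scrolls as it would on a bedside monitor, then save the animation as a video file. The synthetic signal, the 4-second sweep window, and the availability of an ffmpeg writer are assumptions for illustration.

```python
# Replay an archived waveform as a scrolling monitor trace and save as video.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fs = 250                                    # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)                # 30 s of archived signal
signal = np.sin(2 * np.pi * 1.2 * t) * np.exp(-((t % 0.833) - 0.1) ** 2 / 0.001)

window = 4 * fs                             # 4 s sweep, as on a monitor
fig, ax = plt.subplots()
line, = ax.plot(t[:window], signal[:window])
ax.set(xlabel="time (s)", ylabel="ECG (a.u.)")

def update(frame):
    view = slice(frame, frame + window)     # advance the visible window
    line.set_data(t[view], signal[view])
    ax.set_xlim(t[view][0], t[view][-1])
    return (line,)

anim = FuncAnimation(fig, update, frames=range(0, len(t) - window, fs // 25))
anim.save("vitals.avi", writer="ffmpeg", fps=25)  # video output, as MWA produces
```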

Relevance:

30.00%

Publisher:

Abstract:

Monte Carlo (MC) based dose calculations can compute dose distributions with an accuracy surpassing that of conventional algorithms used in radiotherapy, especially in regions of tissue inhomogeneities and surface discontinuities. The Swiss Monte Carlo Plan (SMCP) is a GUI-based framework for photon MC treatment planning (MCTP) interfaced to the Eclipse treatment planning system (TPS). As with any dose calculation algorithm, the MCTP needs to be commissioned and validated before the algorithm is used for clinical cases. The aim of this study is the investigation of a 6 MV beam for clinical situations within the framework of the SMCP. In this respect, all parts, i.e., open fields and all the clinically available beam modifiers, have to be configured so that the calculated dose distributions match the corresponding measurements. Dose distributions for the 6 MV beam were simulated in a water phantom using a phase space source above the beam modifiers. The VMC++ code was used for the radiation transport through the beam modifiers (jaws, wedges, block and multileaf collimator (MLC)) as well as for the calculation of the dose distributions within the phantom. The voxel size of the dose distributions was 2 mm in all directions. The statistical uncertainty of the calculated dose distributions was below 0.4%. Simulated depth dose curves and dose profiles in terms of [Gy/MU] for static and dynamic fields were compared with the corresponding measurements using dose difference and γ analysis. For the dose difference criterion of ±1% of D(max) and the distance to agreement criterion of ±1 mm, the γ analysis showed an excellent agreement between measurements and simulations for all static open and MLC fields. The tuning of the density and the thickness of all hard wedges led to agreement with the corresponding measurements within 1% or 1 mm. Similar results were achieved for the block. For the validation of the tuned hard wedges, a very good agreement between calculated and measured dose distributions was achieved using a 1%/1 mm criterion for the γ analysis. The calculated dose distributions of the enhanced dynamic wedges (10°, 15°, 20°, 25°, 30°, 45° and 60°) met the 1%/1 mm criterion when compared with the measurements for all situations considered. For the IMRT fields, all compared measured dose values agreed with the calculated dose values within a 2% dose difference or within a 1 mm distance. The SMCP has been successfully validated for a static and dynamic 6 MV photon beam, thus resulting in accurate dose calculations suitable for applications in clinical cases.
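
For illustration, a simplified one-dimensional version of the γ analysis used above, combining the dose-difference and distance-to-agreement criteria into a single index; the profiles, grid, and noise below are synthetic stand-ins, not the study's data.

```python
# Simplified 1D global gamma analysis (dose difference + distance to agreement).
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.01, dta=1.0):
    """dd: dose criterion as a fraction of max(d_ref); dta: DTA in mm."""
    dd_abs = dd * d_ref.max()                   # e.g. 1% of D(max)
    gamma = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta) ** 2      # squared distance term
        dose2 = ((d_eval - dr) / dd_abs) ** 2   # squared dose term
        gamma[i] = np.sqrt((dist2 + dose2).min())
    return gamma                                # a point passes if gamma <= 1

x = np.linspace(-50, 50, 201)                   # positions in mm (assumed grid)
measured = np.exp(-(x / 30) ** 4)               # synthetic stand-in profile
simulated = measured + np.random.normal(0, 0.002, x.size)
print("pass rate:", np.mean(gamma_1d(x, measured, x, simulated) <= 1.0))
```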

Relevance:

30.00%

Publisher:

Abstract:

Cardiac myocytes are characterized by distinct structural and functional entities involved in the generation and transmission of the action potential and the excitation-contraction coupling process. Key to their function is the specific organization of ion channels and transporters to and within distinct membrane domains, which supports the anisotropic propagation of the depolarization wave. This review addresses the current knowledge on the molecular actors regulating the distinct trafficking and targeting mechanisms of ion channels in the highly polarized cardiac myocyte. In addition to ubiquitous mechanisms shared by other excitable cells, cardiac myocytes show unique specialization, illustrated by the molecular organization of myocyte-myocyte contacts, e.g., the intercalated disc and the gap junction. Many factors contribute to the specialization of the cardiac sarcolemma and the functional expression of cardiac ion channels, including various anchoring proteins, motors, small GTPases, membrane lipids, and cholesterol. The discovery of genetic defects in some of these actors, leading to complex cardiac disorders, emphasizes the importance of trafficking and targeting of ion channels to cardiac function. A major challenge in the field is to understand how these and other actors work together in intact myocytes to fine-tune ion channel expression and control cardiac excitability.

Relevance:

30.00%

Publisher:

Abstract:

Detailed knowledge of the characteristics of the radiation field shaped by a multileaf collimator (MLC) is essential in intensity-modulated radiotherapy (IMRT). A previously developed multiple source model (MSM) for a 6 MV beam was extended to a 15 MV beam and supplemented with an accurate model of an 80-leaf dynamic MLC. Using the supplemented MSM and the MC code GEANT, lateral dose distributions were calculated in a water phantom and a portal water phantom. Two cases are investigated: a field normally used for validating the step-and-shoot technique, and a field from a realistic IMRT treatment plan delivered with the dynamic MLC. To assess possible spectral changes caused by the modulation of beam intensity by an MLC, the energy spectra in five portal planes were calculated for moving slits of different widths. The extension of the MSM to 15 MV was validated by analysing energy fluences, depth doses and dose profiles. In addition, the MC-calculated primary energy spectrum was verified against an energy spectrum reconstructed from transmission measurements. MC-calculated dose profiles using the MSM for the step-and-shoot case and for the dynamic MLC case are in very good agreement with the measured data from film dosimetry. The investigation of a 13 cm wide field shows an increase in mean photon energy for the 0.25 cm slit compared to the open beam of up to 16% at 6 MV and up to 6% at 15 MV. In conclusion, the MSM supplemented with the dynamic MLC has proven to be a powerful tool for investigational and benchmarking purposes, and even for dose calculations in IMRT.
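
The reported spectral hardening can be summarized by the fluence-weighted mean energy of a spectrum. A toy sketch of that metric follows; both spectra are invented shapes for illustration, not the GEANT-simulated ones.

```python
# Fluence-weighted mean energy behind a narrow slit vs. the open beam.
import numpy as np

def mean_energy(e, fluence):
    return np.sum(e * fluence) / np.sum(fluence)

e = np.linspace(0.05, 6.0, 500)                 # photon energy in MeV
open_beam = e * np.exp(-e / 1.2)                # toy 6 MV-like spectrum
slit_beam = open_beam * (0.5 + 0.5 * e / 6.0)   # slit suppresses low energies

shift = mean_energy(e, slit_beam) / mean_energy(e, open_beam) - 1
print(f"mean photon energy increase behind the slit: {100 * shift:.1f}%")
```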

Relevance:

30.00%

Publisher:

Abstract:

Increased fracture risk has been reported for the adjacent vertebral bodies after vertebroplasty. This increase has been partly attributed to the high Young's modulus of commonly used polymethylmethacrylate (PMMA). Therefore, a compliant PMMA bone cement with a bulk modulus closer to the apparent modulus of cancellous bone has been produced, achieved by introducing pores into the cement. Due to the reduced failure strength of this porous PMMA cement, cancellous bone augmented with such cement could deteriorate under dynamic loading. The aim of the present study was to assess the potential for acute failure, particle generation and the mechanical properties of cancellous bone augmented with this compliant cement in comparison to regular cement. For this purpose, vertebral biopsies were augmented with porous and regular PMMA bone cement, subjected to dynamic tests and then compressed to failure. Changes in Young's modulus and height due to dynamic loading were determined. Afterwards, yield strength and Young's modulus were determined by compressive tests to failure and compared to those of the individual composite materials. No failure occurred and no particle generation was observed during dynamic testing in either group. Height loss was significantly higher for the porous cement composite (0.53 ± 0.21%) than for the biopsies augmented with regular cement (0.16 ± 0.10%). Young's modulus of biopsies augmented with porous PMMA was comparable to that of cancellous bone or porous cement alone (200-700 MPa). The yield strength of those biopsies (21.1 ± 4.1 MPa) was roughly twice that of porous cement alone (11.6 ± 3.3 MPa).
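
As a worked illustration of how such quantities are commonly extracted from a compression test, the sketch below computes Young's modulus as the slope of the initial linear region and yield strength via a 0.2% strain offset; the synthetic stress-strain curve and both of those choices are assumptions, not the study's protocol.

```python
# Extract modulus (slope of linear region) and yield (0.2% offset) from a curve.
import numpy as np

strain = np.linspace(0, 0.06, 600)
E_true = 450.0                                        # MPa (within 200-700)
stress = E_true * strain / (1 + (strain / 0.04) ** 3) # toy softening curve

linear = strain < 0.005                               # assumed linear region
E = np.polyfit(strain[linear], stress[linear], 1)[0]  # modulus = fitted slope

offset_line = E * (strain - 0.002)                    # 0.2%-offset line
i = np.argmax(offset_line > stress)                   # first crossing = yield
print(f"Young's modulus = {E:.0f} MPa, yield strength = {stress[i]:.1f} MPa")
```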

Relevance:

30.00%

Publisher:

Abstract:

Humanity's insatiable curiosity to explore the universe and our solar system makes larger propulsion capabilities essential for executing efficient transfers and carrying more scientific equipment. In the field of space trajectory optimization, fundamental advances in using low-thrust propulsion and exploiting multi-body dynamics have played a pivotal role in designing efficient space mission trajectories. The former provides a larger cumulative momentum change than conventional chemical propulsion, whereas the latter yields almost ballistic trajectories requiring a negligible amount of propellant. However, the problem of space trajectory design translates into an optimal control problem which is, in general, time-consuming and very difficult to solve. The goal of this thesis is therefore to address this problem by developing a methodology that simplifies and facilitates the process of finding initial low-thrust trajectories in both two-body and multi-body environments. This initial solution not only gives mission designers a better understanding of the problem and solution, but also serves as a good initial guess for high-fidelity optimal control solvers and increases their convergence rate. Almost all high-fidelity solvers benefit from an initial guess that already satisfies the equations of motion and some of the most important constraints. Despite the nonlinear nature of the problem, we seek a robust technique applicable to a wide range of typical low-thrust transfers with reduced computational intensity. Another important aspect of the developed methodology is the representation of low-thrust trajectories by Fourier series, which reduces the number of design variables significantly. Emphasis is placed on simplifying the equations of motion as far as possible and on avoiding approximation of the controls, both of which speed up the solution-finding procedure. Several example applications of two- and three-dimensional two-body low-thrust transfers are considered. In addition, in multi-body dynamics, and in particular the restricted three-body problem, several Earth-to-Moon low-thrust transfers are investigated.
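
A minimal sketch of the Fourier-series representation mentioned above: each trajectory coordinate is written as a truncated series in time, so the design variables shrink to a handful of coefficients rather than a discretized control history. The two-term series and all numbers below are illustrative assumptions, not the thesis's formulation.

```python
# Represent one trajectory coordinate as a truncated Fourier series in time.
import numpy as np

def fourier_coord(t, tf, a0, a, b):
    """r(t) = a0 + sum_n [a_n cos(n*pi*t/tf) + b_n sin(n*pi*t/tf)]."""
    n = np.arange(1, len(a) + 1)[:, None]
    arg = n * np.pi * t[None, :] / tf
    return a0 + a @ np.cos(arg) + b @ np.sin(arg)

tf = 10.0                                   # transfer time, normalized units
t = np.linspace(0.0, tf, 200)
r = fourier_coord(t, tf, 1.5, np.array([-0.4, 0.05]), np.array([0.2, -0.03]))
# Differentiating r(t) analytically and substituting into the equations of
# motion recovers the thrust acceleration needed to fly this shape, which is
# how the initial guess for a high-fidelity optimal control solver is formed.
```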

Relevance:

30.00%

Publisher:

Abstract:

In many complex and dynamic domains, the ability to generate and then select the appropriate course of action is based on the decision maker's "reading" of the situation, in other words, their ability to assess the situation and predict how it will evolve over the next few seconds. Current theories regarding option generation during the situation assessment and response phases of decision making offer contrasting views on the cognitive mechanisms that support superior performance. The Recognition-Primed Decision-making model (RPD; Klein, 1989) and the Take-The-First heuristic (TTF; Johnson & Raab, 2003) suggest that superior decisions are made by generating few options and then selecting the first option as the final one. Long-Term Working Memory theory (LTWM; Ericsson & Kintsch, 1995), on the other hand, posits that skilled decision makers construct rich, detailed situation models, and that, as a result, skilled performers should have the ability to generate more of the available task-relevant options. The main goal of this dissertation was to use these theories about option generation to further the understanding of how police officers anticipate a perpetrator's actions, and make decisions about how to respond, during dynamic law enforcement situations. An additional goal was to gather information that can be used, in the future, to design training based on the anticipation skills, decision strategies, and processes of experienced officers. Two studies were conducted to achieve these goals. Study 1 identified video-based law enforcement scenarios that could be used to discriminate between experienced and less-experienced police officers in terms of their ability to anticipate the outcome. The discriminating scenarios were used as the stimuli in Study 2; 23 experienced and 26 less-experienced police officers observed temporally occluded versions of the scenarios and then completed assessment and response option-generation tasks. The results provided mixed support for the competing accounts of option generation in these situations. Consistent with RPD and TTF, participants typically selected the first-generated option as their final one, and did so during both the assessment and response phases of decision making. Consistent with LTWM theory, participants, regardless of experience level, generated more task-relevant assessment options than task-irrelevant options. However, an expected interaction between experience level and option relevance was not observed. Collectively, the two studies provide a deeper understanding of how police officers make decisions in dynamic situations. The methods developed and employed in the studies can be used to investigate anticipation and decision making in other critical domains (e.g., nursing, military). The results are discussed in relation to how they can inform future studies of option-generation performance and how they could be applied to develop training for law enforcement officers.

Relevance:

30.00%

Publisher:

Abstract:

Most languages fall into one of two camps: either they adopt a unique, static type system, or they abandon static type-checks for run-time checks. Pluggable types blur this division by (i) making static type systems optional, and (ii) supporting a choice of type systems for reasoning about different kinds of static properties. Dynamic languages can then benefit from static-checking without sacrificing dynamic features or committing to a unique, static type system. But the overhead of adopting pluggable types can be very high, especially if all existing code must be decorated with type annotations before any type-checking can be performed. We propose a practical and pragmatic approach to introduce pluggable type systems to dynamic languages. First of all, only annotated code is type-checked. Second, limited type inference is performed on unannotated code to reduce the number of reported errors. Finally, external annotations can be used to type third-party code. We present Typeplug, a Smalltalk implementation of our framework, and report on experience applying the framework to three different pluggable type systems.
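
The workflow maps closely onto optional type checking in other languages. Below is a Python analogy, not the paper's Smalltalk/Typeplug system: with an external checker such as mypy, only annotated code is type-checked, unannotated code stays dynamic, and .pyi stub files play the role of external annotations for third-party code.

```python
# Optional typing analogy: annotations are checked where present, ignored where
# absent, and the program runs either way.

def total(prices: list[float]) -> float:   # annotated: statically checked
    return sum(prices)

def legacy(x, y):                          # unannotated: left dynamic, so no
    return x + y                           # annotation burden before checking

total([9.99, 5.00])                        # fine for the checker and at runtime
total((9.99, 5.00))                        # flagged by the checker (a tuple is
                                           # not list[float]) yet runs fine,
                                           # since the type system is optional
legacy("a", "b")                           # never statically checked
```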

Relevance:

30.00%

Publisher:

Abstract:

Mainstream IDEs such as Eclipse support developers in managing software projects mainly by offering static views of the source code. Such a static perspective neglects any information about runtime behavior. However, object-oriented programs heavily rely on polymorphism and late binding, which makes them difficult to understand based on their static structure alone. Developers thus resort to debuggers or profilers to study the system's dynamics. However, the information provided by these tools is volatile and hence cannot be exploited to ease the navigation of the source space. In this paper we present an approach to augment the static source perspective with dynamic metrics such as precise runtime type information, or memory and object allocation statistics. Dynamic metrics can improve the understanding of a system's behavior and structure. We rely on aspect-based dynamic data gathering to analyze running Java systems. By solving concrete use cases we illustrate how dynamic metrics directly available in the IDE are useful. We also report comprehensively on the efficiency of our approach to gathering dynamic metrics.
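
A minimal sketch of gathering one such dynamic metric, written in Python rather than the paper's Java/aspect toolchain: recording the concrete argument types observed at each call, information an IDE could then attach to the static source view.

```python
# Record the runtime types seen at each function entry via a trace hook.
import sys
from collections import defaultdict

call_types = defaultdict(set)               # (file, line, func) -> seen types

def tracer(frame, event, arg):
    if event == "call":
        code = frame.f_code
        key = (code.co_filename, code.co_firstlineno, code.co_name)
        for name, value in frame.f_locals.items():
            call_types[key].add((name, type(value).__name__))
    return tracer

def greet(who):                             # runtime types invisible statically
    return "hello " + who

sys.settrace(tracer)
greet("world")
sys.settrace(None)

for key, seen in call_types.items():
    print(key, sorted(seen))                # e.g. ('who', 'str')
```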

Relevance:

30.00%

Publisher:

Abstract:

Maintaining object-oriented systems that use inheritance and polymorphism is difficult, since runtime information, such as which methods are actually invoked at a call site, is not visible in the static source code. We have implemented Senseo, an Eclipse plugin enhancing Eclipse's static source views with various dynamic metrics, such as runtime types, the number of objects created, or the amount of memory allocated in particular methods.
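
A sketch of one more Senseo-style metric, again in Python purely for illustration: the memory allocated while a particular method runs, gathered with the standard tracemalloc module and reportable next to that method in a source view.

```python
# Measure memory allocated during one method call with tracemalloc.
import tracemalloc

def build_table(n):                         # the method whose cost we measure
    return [{"id": i, "name": f"row-{i}"} for i in range(n)]

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
rows = build_table(10_000)
after, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"allocated ~{(after - before) / 1024:.0f} KiB (peak {peak / 1024:.0f} KiB)")
```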

Relevance:

30.00%

Publisher:

Abstract:

In conventional software applications, synchronization code is typically interspersed with functional code, thereby impairing the understandability and maintainability of the code base. At the same time, synchronization defined statically in the code cannot adapt to different runtime situations. We propose a new approach to concurrency control which strictly separates the functional code from the synchronization requirements and adapts the objects to be synchronized dynamically to their environment. First-class synchronization specifications express safety requirements, and a dynamic synchronization system adapts objects to different runtime situations. We present an overview of a prototype of our approach together with several classical concurrency problems, and we discuss open issues for further research.
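
A toy sketch of the separation argued for above (the paper's synchronization system is far richer): the functional class knows nothing about locking, and a separate synchronization specification is applied to it afterwards.

```python
# Keep functional code lock-free; apply synchronization as an external spec.
import threading
from functools import wraps

class Account:                              # purely functional code
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount

def synchronize(cls, method_names):         # external synchronization spec
    lock = threading.Lock()                 # one lock guards the listed methods
    for name in method_names:
        plain = getattr(cls, name)
        @wraps(plain)
        def guarded(self, *args, _plain=plain, **kwargs):
            with lock:                      # safety requirement enforced here
                return _plain(self, *args, **kwargs)
        setattr(cls, name, guarded)

synchronize(Account, ["deposit"])           # applied without editing Account

acct = Account()
workers = [threading.Thread(target=lambda: [acct.deposit(1) for _ in range(10_000)])
           for _ in range(4)]
for w in workers: w.start()
for w in workers: w.join()
print(acct.balance)                         # 40000: no lost updates
```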

Relevance:

30.00%

Publisher:

Abstract:

For broadcasting purposes, mixed reality, the combination of real and virtual scene content, has become ubiquitous. Mixed reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for mixed reality applications which uses depth keying and provides three-dimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation, which we consider, alongside the correct alignment of the two modalities and correct occlusion handling, a core issue for obtaining realism and a convincing visual perception. Furthermore, we present a way to support the placement of virtual content in the scene. The core feature of our system is the incorporation of a time-of-flight (ToF) camera. This device delivers real-time depth images of the environment at reasonable resolution and quality. The camera is used to build a static environment model, and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma keying is replaced by depth keying, which is performed efficiently on the graphics processing unit (GPU) using the environment model and the current ToF camera image. Dynamic scene content is thereby automatically extracted and tracked, and this information is used for planning and alignment of virtual content. An additional valuable feature is that depth maps of the mixed content are available in real time, which makes the approach suitable for future 3DTV productions. The paper gives an overview of the whole system, including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content and dynamic object tracking for content planning.
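
A minimal NumPy sketch of the depth-keying/mixing step (the paper performs this per-pixel test on the GPU): the ToF depth of the real scene is compared against the virtual content's depth buffer, and the nearer surface wins. All buffers below are synthetic stand-ins.

```python
# Per-pixel depth keying: keep whichever surface is closer to the camera.
import numpy as np

h, w = 480, 640
real_rgb = np.random.rand(h, w, 3)          # camera image (stand-in)
real_depth = np.full((h, w), 3.0)           # ToF depth in metres (stand-in)

virt_rgb = np.zeros((h, w, 3))
virt_rgb[:, 200:440] = (0.2, 0.8, 0.2)      # a green virtual slab
virt_depth = np.full((h, w), np.inf)        # inf = no virtual content here
virt_depth[:, 200:440] = 2.0                # slab sits 2 m from the camera

occludes = virt_depth < real_depth          # virtual surface is nearer
mixed = np.where(occludes[..., None], virt_rgb, real_rgb)
```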

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Mechanical unloading of failing hearts can trigger functional recovery but results in progressive atrophy and possibly detrimental adaptation. In an unbiased approach, we examined the dynamic effects of unloading duration on molecular markers indicative of myocardial damage, hypothesizing that potential recovery may be improved by an optimized unloading time. METHODS Heterotopically transplanted normal rat hearts were harvested at 3, 8, 15, 30, and 60 days. Forty-seven genes were analyzed using TaqMan-based microarray, Western blot, and immunohistochemistry. RESULTS In parallel with marked atrophy (22% volume loss at 3 days, 64% at 60 days), expression of the myosin heavy-chain isoforms (MHC-α/-β) switched in a characteristic, time-dependent manner. Genes involved in tissue remodeling (FGF-2, CTGF, TGFb, IGF-1) were increasingly upregulated with the duration of unloading. A distinct pattern was observed for genes involved in the generation of contractile force: an indiscriminate early downregulation was followed by a new steady state below normal. mRNA levels of the pro-apoptotic transcripts bax, bnip-3, and cCasp-6 and -9 demonstrated a slight increase up to 30 days of unloading that became pronounced at 60 days. The findings regarding cell death were confirmed at the protein level. Proteasome activity indicated an early increase in protein degradation but decreased below baseline in hearts unloaded for 60 days. CONCLUSIONS We identified incrementally increasing apoptosis after myocardial unloading of the normal rat heart, exacerbated at late time points (60 days) and inversely related to the loss of myocardial mass. Our findings suggest an irreversible detrimental effect of long-term unloading on myocardium that may be precluded by partial reloading and may be amenable to molecular therapeutic intervention.

Relevance:

30.00%

Publisher:

Abstract:

Virtual worlds have moved from being a geek topic to one of mainstream academic interest. This transition is contingent not only on the growing economic, societal and cultural value of these virtual realities and their effect upon real life, but also on their convenience as fields for experimentation and for testing models and paradigms. User creation, however, is not something that has been transplanted from the real to the virtual world, but a phenomenon and a dynamic process that happens from within and is defined through complex relationships between the commercial and the non-commercial, the commodified and the non-commodified, the individual and the communal, the amateur and the professional, art and non-art. Accounting for this complex environment, the present paper explores user-created content in virtual worlds, its dimensions and value and, above all, its constraints by code and law. It puts forward suggestions for better understanding and harnessing this creativity.