912 results for simplicity
Abstract:
The biggest challenge facing software developers today is how to gracefully evolve complex software systems in the face of changing requirements. We clearly need software systems to be more dynamic, compositional and model-centric, but instead we continue to build systems that are static, baroque and inflexible. How can we better build change-enabled systems in the future? To answer this question, we propose to look back to one of the most successful systems to support change, namely Smalltalk. We briefly introduce Smalltalk with a few simple examples, and draw some lessons for software evolution. Smalltalk's simplicity, its reflective design, and its highly dynamic nature all go a long way towards enabling change in Smalltalk applications. We then illustrate how these lessons work in practice by reviewing a number of research projects that support software evolution by exploiting Smalltalk's design. We conclude by summarizing open issues and challenges for change-enabled systems of the future.
Abstract:
The use of self-etch primers has increased steadily because of their time savings and greater simplicity; however, their overall benefits, potential disadvantages, and harms have not been assessed systematically. In this study, we reviewed randomized controlled trials comparing 1-stage (self-etch) and 2-stage (acid-etch) bonding in orthodontic patients, assessing the risk of attachment failure, bonding time, and demineralization adjacent to attachments over a minimum follow-up period of 12 months.
Abstract:
We derive a new class of iterative schemes for accelerating the convergence of the EM algorithm by exploiting the connection between fixed point iterations and extrapolation methods. First, we present a general formulation of one-step iterative schemes, which are obtained by cycling with the extrapolation methods. We then square the one-step schemes to obtain the new class of methods, which we call SQUAREM. Squaring a one-step iterative scheme simply means applying it twice within each cycle of the extrapolation method. Here we focus on first order, or rank-one, extrapolation methods for two reasons: (1) simplicity and (2) computational efficiency. In particular, we study two first order extrapolation methods, the reduced rank extrapolation (RRE1) and minimal polynomial extrapolation (MPE1). The convergence of the new schemes, both one-step and squared, is non-monotonic with respect to the residual norm. The first order one-step and SQUAREM schemes are linearly convergent, like the EM algorithm, but with a faster rate of convergence. We demonstrate, through five different examples, the effectiveness of the first order SQUAREM schemes, SqRRE1 and SqMPE1, in accelerating the EM algorithm. The SQUAREM schemes are also shown to be vastly superior to their one-step counterparts, RRE1 and MPE1, in terms of computational efficiency. The proposed extrapolation schemes can fail due to the numerical problems of stagnation and near breakdown. We have developed a new hybrid iterative scheme that combines the RRE1 and MPE1 schemes in such a manner that it overcomes both stagnation and near breakdown. The squared first order hybrid scheme, SqHyb1, emerges as the iterative scheme of choice based on our numerical experiments. It combines the fast convergence of SqMPE1, while avoiding near breakdowns, with the stability of SqRRE1, while avoiding stagnations. The SQUAREM methods can be incorporated very easily into an existing EM algorithm.
They only require the basic EM step for their implementation and do not require any other auxiliary quantities such as the complete-data log-likelihood or its gradient or Hessian. They are an attractive option in problems with a very large number of parameters, and in problems where the statistical model is complex, the EM algorithm is slow, and each EM step is computationally demanding.
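The squaring idea described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: a generic fixed-point map F stands in for a full EM step, and the steplength is an RRE1-type choice.

```python
import numpy as np

def squarem_cycle(x, F):
    """One cycle of a squared first-order scheme (SQUAREM-style sketch).

    F is the fixed-point map (one EM step). The steplength below is an
    RRE1-type choice; this is an illustrative simplification, not the
    authors' implementation.
    """
    x1 = F(x)
    x2 = F(x1)
    r = x1 - x                # residual of one step
    v = (x2 - x1) - r         # second-order difference
    vv = np.dot(v, v)
    if vv < 1e-14:            # near breakdown: fall back to two plain steps
        return x2
    alpha = np.dot(r, v) / vv
    # "squared" extrapolation update
    return x - 2.0 * alpha * r + alpha**2 * v

# Example: the linear contraction F(x) = 0.5*x + 1 has fixed point x* = 2;
# for a linear map, one squared cycle lands on it exactly.
F = lambda x: 0.5 * x + 1.0
x_new = squarem_cycle(np.array([10.0]), F)
```

A production scheme would additionally monitor the residual norm and revert to plain EM steps when the extrapolation stagnates, as the abstract's hybrid SqHyb1 does.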
Abstract:
BACKGROUND: The Anesthetic Conserving Device (AnaConDa) uncouples delivery of a volatile anesthetic (VA) from fresh gas flow (FGF) using a continuous infusion of liquid volatile into a modified heat-moisture exchanger capable of adsorbing VA during expiration and releasing adsorbed VA during inspiration. It combines the simplicity and responsiveness of high FGF with low agent expenditures. We performed in vitro characterization of the device before developing a population pharmacokinetic model for sevoflurane administration with the AnaConDa, and retrospectively testing its performance (internal validation). MATERIALS AND METHODS: Eighteen females and 20 males, aged 31-87, BMI 20-38, were included. The end-tidal concentrations were varied and recorded together with the VA infusion rates into the device, ventilation and demographic data. The concentration-time course of sevoflurane was described using linear differential equations, and the most suitable structural model and typical parameter values were identified. The individual pharmacokinetic parameters were obtained and tested for covariate relationships. Prediction errors were calculated. RESULTS: In vitro studies assessed the contribution of the device to the pharmacokinetic model. In vivo, the sevoflurane concentration-time courses on the patient side of the AnaConDa were adequately described with a two-compartment model. The population median absolute prediction error was 27% (interquartile range 13-45%). CONCLUSION: The predictive performance of the two-compartment model was similar to that of models accepted for TCI administration of intravenous anesthetics, supporting open-loop administration of sevoflurane with the AnaConDa. Further studies will focus on prospective testing and external validation of the model implemented in a target-controlled infusion device.
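As an illustration of the kind of model fitted here, the following is a minimal two-compartment simulation under a constant infusion. All rate constants and volumes are invented for the example; they are not values from the study.

```python
def simulate_two_compartment(rate_in, k10, k12, k21, V1, dt=0.01, t_end=500.0):
    """Euler integration of a generic two-compartment model (illustrative).

    A1, A2 are drug amounts in the central and peripheral compartments;
    k10 is the elimination rate constant, k12/k21 the inter-compartment
    transfer rates, and V1 the central volume of distribution.
    """
    A1 = A2 = 0.0
    t = 0.0
    while t < t_end:
        dA1 = rate_in - (k10 + k12) * A1 + k21 * A2
        dA2 = k12 * A1 - k21 * A2
        A1 += dA1 * dt
        A2 += dA2 * dt
        t += dt
    return A1 / V1  # central-compartment concentration

# At steady state the central concentration approaches rate_in / (k10 * V1).
c = simulate_two_compartment(rate_in=1.0, k10=0.1, k12=0.05, k21=0.05, V1=10.0)
```

With the invented parameters above, the simulated concentration settles near rate_in / (k10 * V1) = 1.0, which is a quick sanity check on any implementation of this model class.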
Abstract:
Inductive-capacitive (LC) resonant circuit sensors are low-cost, wireless, durable, simple to fabricate and battery-less. Consequently, they are well suited to sensing applications in harsh environments or in situations where large numbers of sensors are needed. They are also advantageous in applications where access to the sensor is limited or impossible or when sensors are needed on a disposable basis. Due to their many advantages, LC sensors have been used for sensing a variety of parameters including humidity, temperature, chemical concentrations, pH, stress/pressure, strain, food quality and even biological growth. However, current versions of the LC sensor technology are limited to sensing only one parameter. The purpose of this work is to develop new types of LC sensor systems that are simpler to fabricate (hence lower cost) or capable of monitoring multiple parameters simultaneously. One design presented in this work, referred to as the multi-element LC sensor, is able to measure multiple parameters simultaneously using a second capacitive element. Compared to conventional LC sensors, this design can sense multiple parameters with a higher detection range than two independent sensors while maintaining the same overall sensor footprint. In addition, the two-element sensor does not suffer from interference issues normally encountered while implementing two LC sensors in close proximity. Another design, the single-spiral inductive-capacitive sensor, utilizes the parasitic capacitance of a coil or spring structure to form a single layer LC resonant circuit. Unlike conventional LC sensors, this design is truly planar, thus simplifying its fabrication process and reducing sensor cost. 
Due to the simplicity of this sensor layout it will be easier and more cost-effective for embedding in common building or packaging materials during manufacturing processes, thereby adding functionality to current products (such as drywall sheets) while having a minor impact on overall unit cost. These modifications to the LC sensor design significantly improve the functionality and commercial feasibility of this technology, especially for applications where a large array of sensors or multiple sensing parameters are required.
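The sensing principle behind all of these designs is the shift of the LC resonant frequency as capacitance (or inductance) changes with the measured parameter. A minimal numeric sketch, with illustrative component values not taken from the work:

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of an ideal LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values: a 1 uH coil with 100 pF capacitance resonates near
# 15.9 MHz. If, say, humidity raises the capacitance to 110 pF, the remote
# readout sees the resonant frequency drop.
f0 = resonant_frequency(1e-6, 100e-12)
f1 = resonant_frequency(1e-6, 110e-12)
```

Because f scales as 1/sqrt(C), a second capacitive element (as in the multi-element design) adds a second, separable resonance rather than doubling the footprint.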
Abstract:
Mower is a micro-architecture technique which targets branch misprediction penalties in superscalar processors. It speeds up the misprediction recovery process by dynamically evicting stale instructions and fixing the RAT (Register Alias Table) using explicit branch dependency tracking. Tracking branch dependencies is accomplished by using simple bit matrices. This low-overhead technique allows overlapping of the recovery process with instruction fetching, renaming and scheduling from the correct path. Our evaluation of the mechanism indicates that it yields performance very close to ideal recovery and provides up to 5% speed-up and 2% reduction in power consumption compared to a traditional recovery mechanism using a reorder buffer and a walker. The simplicity of the mechanism should permit easy implementation of Mower in an actual processor.
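The bit-matrix idea can be caricatured in software (a deliberate toy; the real mechanism operates on rename-table state in hardware): each in-flight branch owns a bit row, each set bit marks an instruction slot that is control-dependent on that branch, and a misprediction yields a squash mask directly.

```python
class BranchDependencyMatrix:
    """Toy model of per-branch dependency bit rows (illustrative only)."""

    def __init__(self, max_branches):
        self.rows = [0] * max_branches  # one bit row per in-flight branch

    def record(self, branch_id, instr_slot):
        # mark instr_slot as control-dependent on branch_id
        self.rows[branch_id] |= 1 << instr_slot

    def squash_mask(self, branch_id):
        # on a misprediction, the set bits identify stale instructions to evict
        return self.rows[branch_id]

    def resolve(self, branch_id):
        # branch resolved correctly: clear its row for reuse
        self.rows[branch_id] = 0
```

The point of the sketch is that both recording a dependency and producing the squash set are single bitwise operations, which is what makes the hardware overhead low.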
Abstract:
BACKGROUND: A fixed cavovarus foot deformity can be associated with anteromedial ankle arthrosis due to elevated medial joint contact stresses. Supramalleolar valgus osteotomies (SMOT) and lateralizing calcaneal osteotomies (LCOT) are commonly used to treat symptoms by redistributing joint contact forces. In a cavovarus model, the effects of SMOT and LCOT on the lateralization of the center of force (COF) and reduction of the peak pressure in the ankle joint were compared. METHODS: A previously published cavovarus model with fixed hindfoot varus was simulated in 10 cadaver specimens. Closing wedge supramalleolar valgus osteotomies 3 cm above the ankle joint level (6 and 11 degrees) and lateral sliding calcaneal osteotomies (5 and 10 mm displacement) were analyzed at 300 N axial static load (half body weight). The COF migration and peak pressure decrease in the ankle were recorded using high-resolution TekScan pressure sensors. RESULTS: A significant lateral COF shift was observed for each osteotomy: 2.1 mm for the 6 degrees (P = .014) and 2.3 mm for the 11 degrees SMOT (P = .010). The 5 mm LCOT led to a lateral shift of 2.0 mm (P = .042) and the 10 mm LCOT to a shift of 3.0 mm (P = .006). Comparing the different osteotomies among themselves, no significant differences were recorded. No significant anteroposterior COF shift was seen. A significant peak pressure reduction was recorded for each osteotomy: The SMOT led to a reduction of 29% (P = .033) for the 6 degrees and 47% (P = .003) for the 11 degrees osteotomy, and the LCOT to a reduction of 41% (P = .003) for the 5 mm and 49% (P = .002) for the 10 mm osteotomy. As with the COF lateralization, no significant differences between the osteotomies were seen. CONCLUSION: LCOT and SMOT significantly reduced anteromedial ankle joint contact stresses in this cavovarus model. The unloading effects of both osteotomies were equivalent.
More correction did not lead to significantly more lateralization of the COF or a greater reduction in peak pressure, although a trend was seen. CLINICAL RELEVANCE: In patients with fixed cavovarus feet, both SMOT and LCOT provided equally good redistribution of elevated ankle joint contact forces. Increasing the amount of displacement did not appear to improve joint pressures proportionally. The site of osteotomy could therefore be chosen on the basis of the surgeon's preference, simplicity, or local factors in case of more complex reconstructions.
Abstract:
High-quality data are essential for veterinary surveillance systems, and their quality can be affected by the source and the method of collection. Data recorded on farms could provide detailed information about the health of a population of animals, but the accuracy of the data recorded by farmers is uncertain. The aims of this study were to evaluate the quality of the data on animal health recorded on 97 Swiss dairy farms, to compare the quality of the data obtained by different recording systems, and to obtain baseline data on the health of the animals on the 97 farms. Data on animal health were collected from the farms for a year. Their quality was evaluated by assessing the completeness and accuracy of the recorded information, and by comparing farmers' and veterinarians' records. The quality of the data provided by the farmers was satisfactory, although electronic recording systems made it easier to trace the animals treated. The farmers tended to record more health-related events than the veterinarians, although this varied with the event considered, and some events were recorded only by the veterinarians. The farmers' attitude towards data collection was positive. Factors such as motivation, feedback, training, and simplicity and standardisation of data collection were important because they influenced the quality of the data.
Abstract:
Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected.
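The measurement-based aggregation described above amounts to a population-weighted mean of municipality-level radon averages. A sketch with invented toy numbers:

```python
def population_weighted_mean(radon_levels, populations):
    """Population-weighted mean of municipality-level radon averages (Bq/m3)."""
    total = sum(populations)
    return sum(r * p for r, p in zip(radon_levels, populations)) / total

# Toy example (values invented): two municipalities with mean radon levels
# of 100 and 60 Bq/m3 and populations of 1000 and 3000 residents.
mean_exposure = population_weighted_mean([100.0, 60.0], [1000, 3000])
```

The weighting matters because municipality radon means and population sizes are correlated; an unweighted average of the two toy municipalities would give 80 Bq/m³ instead of 70.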
Abstract:
The MQN-mapplet is a Java application giving access to the structure of small molecules in large databases via color-coded maps of their chemical space. These maps are projections from a 42-dimensional property space defined by 42 integer value descriptors called molecular quantum numbers (MQN), which count different categories of atoms, bonds, polar groups, and topological features and categorize molecules by size, rigidity, and polarity. Despite its simplicity, MQN-space is relevant to biological activities. The MQN-mapplet allows localization of any molecule on the color-coded images, visualization of the molecules, and identification of analogs as neighbors on the MQN-map or in the original 42-dimensional MQN-space. No query molecule is necessary to start the exploration, which may be particularly attractive for nonchemists. To our knowledge, this type of interactive exploration tool is unprecedented for very large databases such as PubChem and GDB-13 (almost one billion molecules). The application is freely available for download at www.gdb.unibe.ch.
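Neighbor identification in a descriptor space of this kind reduces to a nearest-neighbor search over integer vectors. A sketch using city-block distance; the distance choice and the tiny two-descriptor library below are illustrative, not necessarily what the mapplet uses internally:

```python
import numpy as np

def nearest_neighbors(query, library, k=5):
    """Return indices and distances of the k nearest library molecules.

    query: (d,) integer descriptor vector; library: (n, d) matrix.
    Uses city-block (Manhattan) distance; illustrative sketch only.
    """
    dist = np.abs(library - query).sum(axis=1)
    idx = np.argsort(dist)[:k]
    return idx, dist[idx]

# Tiny toy "library" of 2-descriptor molecules (invented values):
library = np.array([[0, 0], [1, 1], [5, 5]])
idx, dist = nearest_neighbors(np.array([1, 0]), library, k=2)
```

For a billion-molecule collection such as GDB-13, a brute-force scan like this would of course be replaced by precomputed projections or an index structure; the sketch only shows the underlying operation.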
Abstract:
Context. Planet formation models have been developed during the past years to try to reproduce what has been observed of both the solar system and the extrasolar planets. Some of these models have partially succeeded, but they focus on massive planets and, for the sake of simplicity, exclude planets belonging to planetary systems. However, more and more planets are now found in planetary systems. This tendency, which is a result of radial velocity, transit, and direct imaging surveys, seems to be even more pronounced for low-mass planets. These new observations require improving planet formation models, including new physics, and considering the formation of systems. Aims: In a recent series of papers, we have presented some improvements in the physics of our models, focussing in particular on the internal structure of forming planets, and on the computation of the excitation state of planetesimals and their resulting accretion rate. In this paper, we focus on the concurrent effect of the formation of more than one planet in the same protoplanetary disc and show the effect of this multiplicity on the architecture and composition of the resulting systems. Methods: We used an N-body calculation including collision detection to compute the orbital evolution of a planetary system. Moreover, we describe the effect of competition for accretion of gas and solids, as well as the effect of gravitational interactions between planets. Results: We show that the masses and semi-major axes of planets are modified by both the effect of competition and gravitational interactions. We also present the effect of the assumed number of forming planets in the same system (a free parameter of the model), as well as the effect of the inclination and eccentricity damping. We find that the fraction of ejected planets increases from nearly 0 to 8% as the number of embryos we seed the system with increases from 2 to 20.
Moreover, our calculations show that, when considering planets more massive than ~5 M⊕, simulations with 10 or 20 planetary embryos statistically give the same results in terms of mass function and period distribution.
Abstract:
Based on the results from detailed structural and petrological characterisation and on up-scaled laboratory values for sorption and diffusion, blind predictions were made for the STT1 dipole tracer test performed in the Swedish Äspö Hard Rock Laboratory. The tracers used were nonsorbing, such as uranine and tritiated water, weakly sorbing ²²Na⁺, ⁸⁵Sr²⁺, ⁴⁷Ca²⁺, and more strongly sorbing ⁸⁶Rb⁺, ¹³³Ba²⁺, ¹³⁷Cs⁺. Our model consists of two parts: (1) a flow part based on a 2D-streamtube formalism accounting for the natural background flow field and with an underlying homogeneous and isotropic transmissivity field and (2) a transport part in terms of the dual porosity medium approach which is linked to the flow part by the flow porosity. The calibration of the model was done using the data from one single uranine breakthrough (PDT3). The study clearly showed that matrix diffusion into a highly porous material, fault gouge, had to be included in our model, as evidenced by the characteristic shape of the breakthrough curve and in line with geological observations. After the disclosure of the measurements, it turned out that, in spite of the simplicity of our model, the prediction for the nonsorbing and weakly sorbing tracers was fairly good. The blind prediction for the more strongly sorbing tracers was in general less accurate. The reason for the good predictions is deemed to be the result of the choice of a model structure strongly based on geological observation. The breakthrough curves were inversely modelled to determine in situ values for the transport parameters and to draw consequences on the model structure applied. For good fits, only one additional fracture family in contact with cataclasite had to be taken into account, but no new transport mechanisms had to be invoked. The in situ values for the effective diffusion coefficient for fault gouge are a factor of 2–15 larger than the laboratory data.
For cataclasite, both data sets have values comparable to laboratory data. The extracted Kd values for the weakly sorbing tracers are larger than Swedish laboratory data by a factor of 25–60, but agree within a factor of 3–5 for the more strongly sorbing nuclides. The reason for the inconsistency in the Kd values is the use of fresh granite in the laboratory studies, whereas tracers in the field experiments interact only with fracture fault gouge and to a lesser extent with cataclasite, both being mineralogically very different (e.g. clay-bearing) from the intact wall rock.
Abstract:
We present a fluorescence-lifetime based method for monitoring cell and tissue activity in situ, during cell culturing and in the presence of a strong autofluorescence background. The miniature fiber-optic probes are easily incorporated in the tight space of a cell culture chamber or in an endoscope. As a first application we monitored the cytosolic calcium levels in porcine tracheal explant cultures using the Calcium Green-5N (CG5N) indicator. Despite the simplicity of the optical setup we are able to detect changes of calcium concentration as small as 2.5 nM, with a monitoring time resolution of less than 1 s.
Abstract:
Spike timing dependent plasticity (STDP) is a phenomenon in which the precise timing of spikes affects the sign and magnitude of changes in synaptic strength. STDP is often interpreted as the comprehensive learning rule for a synapse - the "first law" of synaptic plasticity. This interpretation is made explicit in theoretical models in which the total plasticity produced by complex spike patterns results from a superposition of the effects of all spike pairs. Although such models are appealing for their simplicity, they can fail dramatically. For example, the measured single-spike learning rule between hippocampal CA3 and CA1 pyramidal neurons does not predict the existence of long-term potentiation, one of the best-known forms of synaptic plasticity. Layers of complexity have been added to the basic STDP model to repair predictive failures, but they have been outstripped by experimental data. We propose an alternate first law: neural activity triggers changes in key biochemical intermediates, which act as a more direct trigger of plasticity mechanisms. One particularly successful model uses intracellular calcium as the intermediate and can account for many observed properties of bidirectional plasticity. In this formulation, STDP is not itself the basis for explaining other forms of plasticity, but is instead a consequence of changes in the biochemical intermediate, calcium. Eventually a mechanism-based framework for learning rules should include other messengers, discrete change at individual synapses, spread of plasticity among neighboring synapses, and priming of hidden processes that change a synapse's susceptibility to future change. Mechanism-based models provide a rich framework for the computational representation of synaptic plasticity.
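The calcium-control hypothesis described above can be caricatured with a two-threshold rule: calcium below a depression threshold leaves the synapse unchanged, an intermediate band depresses it, and a high band potentiates it. The thresholds and learning rate below are invented for illustration; they are not values from any specific model.

```python
def calcium_weight_update(ca, theta_d=0.35, theta_p=0.55, eta=0.01):
    """Toy two-threshold calcium-control plasticity rule (illustrative).

    ca: intracellular calcium level (arbitrary units).
    Returns the sign and magnitude of the synaptic weight change.
    """
    if ca < theta_d:
        return 0.0        # below depression threshold: no change
    if ca < theta_p:
        return -eta       # intermediate calcium: depression (LTD)
    return eta            # high calcium: potentiation (LTP)
```

In such a formulation, spike timing enters only indirectly, through its effect on the calcium transient, which is exactly the sense in which STDP becomes a consequence rather than a first law.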
Abstract:
Introduction: According to the Swiss Health Survey 2007, 1.7% of the adult population use traditional Chinese medicine (including Chinese herbal medicine, but excluding acupuncture). In contrast to conventional drugs, that contain single chemically defined substances, prescriptions of Chinese herbs are mixtures of up to 40 ingredients (parts of plants, fungi, animal substances and minerals). Originally they were taken in the form of decoctions, but nowadays granules are more popular. Typical daily dosages of granules range from 8 to 12 g. In a recent work we identified the most commonly used Chinese herbs (all ingredients are referred to as herbs for reasons of simplicity) and classical formulas (mixtures). Here we present a short overview and the example of suan zao ren (Ziziphi Spinosae Semen), which is used in the treatment of insomnia and anxiety and contains saponins that have been shown to increase sleep in animal studies. Material and Methods: A random sample of 1,053 prescriptions was drawn from the database of Lian Chinaherb AG, Switzerland, and analysed according to the most frequently used individual herbs and classical formulas. Cluster analysis (Jaccard similarity coefficient, complete linkage method) was applied to identify common combinations of herbs. Results: The most frequently used herbs were dang gui (Angelicae Sinensis Radix), fu ling (Poria), bai shao (Paeoniae Radix Alba), and gan cao (Glycyrrhizae Radix et Rhizoma); the most frequently used classical formulas were gui pi tang (Restore the Spleen Decoction) and xiao yao san (Rambling Powder). The average number of herbs per prescription was 12.0, and the average daily dosage of granules was 8.7 g. 74.3% of the prescriptions were for female and 24.8% for male patients. Suan zao ren was present in 14.2% of all prescriptions. These prescriptions contained on average 13.7 herbs, and the daily dosage of granules was 8.9 g.
Suan zao ren was more frequently prescribed by practitioners of non-Asian than of Asian origin but equally often for female and male patients. Cluster analysis grouped suan zao ren with yuan zhi (Polygalae Radix), bai zi ren (Platycladi Semen), sheng di huang (Rehmanniae Radix) and dan shen (Salviae Miltiorrhizae Radix et Rhizoma). Discussion: Prescriptions including suan zao ren contained on average slightly more herbs than other prescriptions. This might be due to the fact that two of the three most popular classical formulas with suan zao ren are composed of 13 and 12 herbs with the possibility of adding more ingredients when necessary. Cluster analysis resulted in the clustering of suan zao ren with other herbs of the classical formula tian wang bu xin dan (Emperor of Heaven's Special Pill to Tonify the Heart), indicating the use of suan zao ren for the treatment of insomnia and irritability. Unfortunately, the diagnoses of the patients were unavailable and thus correlations between use of suan zao ren and diseases could not be analysed.
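The similarity measure named in the Methods can be stated in one line. A sketch of the Jaccard coefficient on herb sets follows; the herb names are taken from the abstract, but the two example prescriptions are invented:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of herbs: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Two invented prescriptions sharing two of four distinct herbs:
rx1 = {"suan zao ren", "yuan zhi", "fu ling"}
rx2 = {"suan zao ren", "yuan zhi", "dan shen"}
similarity = jaccard(rx1, rx2)
```

Complete-linkage clustering, as used in the study, would then merge groups of prescriptions based on the minimum pairwise similarity (equivalently, the maximum pairwise distance) between their members.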