971 results for three-shell model
Abstract:
BACKGROUND: The aim of our study was to assess the feasibility of minimally invasive digestive anastomosis using a modular flexible magnetic anastomotic device made up of a set of two flexible chains of magnetic elements. The assembly possesses a non-deployed linear configuration which allows it to be introduced through a dedicated small-sized applicator into the bowel where it takes the deployed form. A centering suture allows the mating between the two parts to be controlled in order to include the viscerotomy between the two magnetic rings and the connected viscera. METHODS AND PROCEDURES: Eight pigs were involved in a 2-week survival experimental study. In five colorectal anastomoses, the proximal device was inserted by a percutaneous endoscopic technique, and the colon was divided below the magnet. The distal magnet was delivered transanally to connect with the proximal magnet. In three jejunojejunostomies, the first magnetic chain was injected in its linear configuration through a small enterotomy. Once delivered, the device self-assembled into a ring shape. A second magnet was injected more distally through the same port. The centering sutures were tied together extracorporeally and, using a knot pusher, magnets were connected. Ex vivo strain testing to determine the compression force delivered by the magnetic device, burst pressure of the anastomosis, and histology were performed. RESULTS: Mean operative time including endoscopy was 69.2 ± 21.9 min, and average time to full patency was 5 days for colorectal anastomosis. Operative times for jejunojejunostomies were 125, 80, and 35 min, respectively. The postoperative period was uneventful. Burst pressure of all anastomoses was ≥ 110 mmHg. Mean strain force to detach the devices was 6.1 ± 0.98 and 12.88 ± 1.34 N in colorectal and jejunojejunal connections, respectively. Pathology showed a mild-to-moderate inflammation score. CONCLUSIONS: The modular magnetic system showed enormous potential to create minimally invasive digestive anastomoses, and may represent an alternative to stapled anastomoses, being easy to deliver, effective, and low cost.
Abstract:
Time-lapse geophysical measurements are widely used to monitor the movement of water and solutes through the subsurface. Yet commonly used deterministic least-squares inversions typically suffer from relatively poor mass recovery, overestimation of plume spread, and a limited ability to appropriately estimate nonlinear model uncertainty. We describe herein a novel inversion methodology designed to reconstruct the three-dimensional distribution of a tracer anomaly from geophysical data and to provide consistent uncertainty estimates using Markov chain Monte Carlo simulation. Posterior sampling is made tractable by using a lower-dimensional model space related both to the Legendre moments of the plume and to predefined morphological constraints. Benchmark results using cross-hole ground-penetrating radar travel-time measurements during two synthetic water tracer application experiments involving increasingly complex plume geometries show that the proposed method not only conserves mass but also provides better estimates of plume morphology and posterior model uncertainty than deterministic inversion results.
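To make the reduced-dimensional sampling idea concrete, here is a minimal, self-contained sketch, not the authors' implementation: the plume is described by a handful of shape parameters instead of Legendre moments, the forward model is a crude set of row and column sums standing in for cross-hole travel-time perturbations, and the posterior is explored with random-walk Metropolis-Hastings. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
nx = nz = 32
x = np.linspace(0, 1, nx)
z = np.linspace(0, 1, nz)
X, Z = np.meshgrid(x, z)

def plume(theta):
    # hypothetical low-dimensional plume description: a 2D Gaussian anomaly
    x0, z0, sx, sz, amp = theta
    return amp * np.exp(-0.5 * (((X - x0) / sx) ** 2 + ((Z - z0) / sz) ** 2))

def forward(theta):
    # crude "tomography": row and column sums stand in for travel-time data
    img = plume(theta)
    return np.concatenate([img.sum(axis=1), img.sum(axis=0)]) / nx

true_theta = np.array([0.5, 0.4, 0.15, 0.10, 1.0])
sigma_d = 0.01
data = forward(true_theta) + rng.normal(0, sigma_d, nz + nx)

def log_post(theta):
    x0, z0, sx, sz, amp = theta
    # uniform priors double as simple morphological constraints
    if not (0 < x0 < 1 and 0 < z0 < 1 and 0.02 < sx < 0.5
            and 0.02 < sz < 0.5 and 0 < amp < 5):
        return -np.inf
    resid = data - forward(theta)
    return -0.5 * np.sum((resid / sigma_d) ** 2)

# random-walk Metropolis-Hastings in the reduced 5-parameter space
theta = np.array([0.3, 0.6, 0.2, 0.2, 0.5])
step = np.array([0.02, 0.02, 0.01, 0.01, 0.05])
lp = log_post(theta)
samples = []
for it in range(20000):
    prop = theta + step * rng.normal(size=5)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it > 5000:
        samples.append(theta.copy())

samples = np.array(samples)
print("posterior mean:", samples.mean(axis=0))
print("true parameters:", true_theta)
```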
Abstract:
Intensification of agricultural production without sound management and regulation can lead to severe environmental problems, as in Western Santa Catarina State, Brazil, where intensive swine production has caused large accumulations of manure and, consequently, water pollution. Natural resource scientists are asked by decision-makers for advice on management and regulatory decisions. Distributed environmental models are useful tools, since they can be used to explore the consequences of various management practices. However, in many areas of the world, quantitative data for model calibration and validation are lacking. The data-intensive distributed environmental model AgNPS was applied in a data-poor environment, the upper catchment (2,520 ha) of the Ariranhazinho River, near the city of Seara, in Santa Catarina State. Steps included data preparation, cell size selection, sensitivity analysis, model calibration and application to different management scenarios. The model was calibrated based on a best guess for model parameters and on a pragmatic sensitivity analysis. The parameters were adjusted to match model outputs (runoff volume, peak runoff rate and sediment concentration) closely with the sparse observed data. A modelling grid cell resolution of 150 m produced appropriate results at acceptable computational cost. The rainfall-runoff response of the AgNPS model was calibrated using three separate rainfall ranges (< 25, 25-60, > 60 mm). Predicted sediment concentrations were consistently six to ten times higher than observed, probably due to sediment trapping along vegetated channel banks. Predicted N and P concentrations in stream water ranged from just below to well above regulatory norms. Expert knowledge of the area, in addition to experience reported in the literature, was able to compensate in part for the limited calibration data. Several scenarios (actual, recommended and excessive manure applications, and point-source pollution from swine operations) could be compared by the model, using a relative ranking rather than quantitative predictions.
Abstract:
As a reflection on education at the Escola Elisava and on design education in general, this intervention addresses the debates that for many years have existed tacitly between the different academic disciplines, showing the critical situation, especially for the fragile discipline of design, that arises when any one of these disciplines imposes itself on the others. Since the mid-1990s, the progressive and generalized adoption in higher education of the Anglo-Saxon model, with its clear predominance of the institutional dimension, has confirmed the pertinence of the critique presented here.
Abstract:
This paper presents an eclectic model that systematizes the dynamics of self-fulfilling crises, drawing on the main features of the three types of third-generation currency crisis models, in order to describe the events that precipitate the abandonment of a fixed parity. The most notable contributions are the implications for economic policy, as well as the loss of the exchange rate's role as an instrument of macroeconomic adjustment when balance-sheet effects are a real possibility.
Abstract:
BACKGROUND: Qualitative frameworks, especially those based on the logical discrete formalism, are increasingly used to model regulatory and signalling networks. A major advantage of these frameworks is that they do not require precise quantitative data, and that they are well-suited for studies of large networks. While numerous groups have developed specific computational tools that provide original methods to analyse qualitative models, a standard format to exchange qualitative models has been missing. RESULTS: We present the Systems Biology Markup Language (SBML) Qualitative Models Package ("qual"), an extension of the SBML Level 3 standard designed for computer representation of qualitative models of biological networks. We demonstrate the interoperability of models via SBML qual through the analysis of a specific signalling network by three independent software tools. Furthermore, the collective effort to define the SBML qual format paved the way for the development of LogicalModel, an open-source model library, which will facilitate the adoption of the format as well as the collaborative development of algorithms to analyse qualitative models. CONCLUSIONS: SBML qual allows the exchange of qualitative models among a number of complementary software tools. SBML qual has the potential to promote collaborative work on the development of novel computational approaches, as well as on the specification and the analysis of comprehensive qualitative models of regulatory and signalling networks.
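As a toy illustration of the kind of logical model SBML qual is meant to encode, and not of the SBML qual syntax or of any specific tool's API, here is a three-component Boolean network with synchronous updates; the components and rules are hypothetical, and real models would be read from an SBML Level 3 "qual" file by dedicated tools such as GINsim.

```python
# Hypothetical three-component Boolean regulatory network, updated synchronously.
def update(state):
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": int(not c),     # C inhibits A
        "B": int(a),         # A activates B
        "C": int(a and b),   # C requires both A and B
    }

state = {"A": 1, "B": 0, "C": 0}
for step in range(6):
    print(step, state)
    state = update(state)
```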
Abstract:
We study the nonequilibrium behavior of the three-dimensional Gaussian random-field Ising model at T=0 in the presence of a uniform external field using a two-spin-flip dynamics. The deterministic, history-dependent evolution of the system is compared with the one obtained with the standard one-spin-flip dynamics used in previous studies of the model. The change in the dynamics yields a significant suppression of coercivity, but the distribution of avalanches (in number and size) stays remarkably similar, except for the largest ones that are responsible for the jump in the saturation magnetization curve at low disorder in the thermodynamic limit. By performing a finite-size scaling study, we find strong evidence that the change in the dynamics does not modify the universality class of the disorder-induced phase transition.
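For readers unfamiliar with the metastable dynamics referred to above, the following is a minimal sketch of the standard one-spin-flip T=0 dynamics of the 3D Gaussian random-field Ising model, i.e. the baseline the abstract compares against, not the two-spin-flip dynamics it introduces: the external field is raised adiabatically and avalanche sizes are recorded. Lattice size and disorder strength are illustrative.

```python
import numpy as np

def rfim_hysteresis(L=8, sigma=2.0, J=1.0, seed=0):
    """T=0 single-spin-flip dynamics of the 3D Gaussian RFIM on the
    ascending branch of the hysteresis loop; returns flip fields,
    magnetization curve and avalanche sizes."""
    rng = np.random.default_rng(seed)
    h = rng.normal(0.0, sigma, size=(L, L, L))   # quenched Gaussian random fields
    s = -np.ones((L, L, L))                      # start fully magnetized down

    def neighbour_sum(s):
        # sum of the 6 nearest neighbours with periodic boundary conditions
        return sum(np.roll(s, shift, axis) for axis in range(3) for shift in (1, -1))

    fields, magnetization, avalanches = [], [], []
    while (s < 0).any():
        # next spin to become unstable as H increases: the down spin
        # with the largest J * sum(neighbours) + h_i
        local = J * neighbour_sum(s) + h
        candidates = np.where(s < 0, local, -np.inf)
        i = np.unravel_index(np.argmax(candidates), s.shape)
        H = -candidates[i]                       # field at which that spin flips
        s[i] = 1
        size = 1
        # propagate the avalanche at fixed H until no spin is unstable
        while True:
            local = J * neighbour_sum(s) + h + H
            flips = (s < 0) & (local > 0)
            n = int(flips.sum())
            if n == 0:
                break
            s[flips] = 1
            size += n
        fields.append(H)
        magnetization.append(s.mean())
        avalanches.append(size)
    return np.array(fields), np.array(magnetization), np.array(avalanches)

H, m, sizes = rfim_hysteresis()
print(f"{len(sizes)} avalanches, largest = {sizes.max()}")
```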
Abstract:
A model for the study of hysteresis and avalanches in a first-order phase transition from a single variant phase to a multivariant phase is presented. The model is based on a modification of the random-field Potts model with metastable dynamics by adding a dipolar interaction term truncated at nearest neighbors. We focus our study on hysteresis loop properties, on the three-dimensional microstructure formation, and on avalanche statistics.
Abstract:
Isotopic and isotonic chains of superheavy nuclei are analyzed to search for spherical double shell closures beyond Z=82 and N=126 within the new effective field theory model of Furnstahl, Serot, and Tang for the relativistic nuclear many-body problem. We take into account several indicators to identify the occurrence of possible shell closures, such as two-nucleon separation energies, two-nucleon shell gaps, average pairing gaps, and the shell correction energy. The effective Lagrangian model predicts (N=172, Z=120) and (N=258, Z=120) as spherical doubly magic superheavy nuclei, whereas (N=184, Z=114) shows some magic character depending on the parameter set. The magicity of a particular neutron (proton) number in the analyzed mass region is found to depend on the number of protons (neutrons) present in the nucleus.
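For reference, the first two indicators listed above have standard definitions in terms of the binding energy B(Z,N); a pronounced peak of the two-nucleon shell gap at a given neutron (proton) number signals a shell closure:

```latex
S_{2n}(Z,N) = B(Z,N) - B(Z,N-2), \qquad
\delta_{2n}(Z,N) = S_{2n}(Z,N) - S_{2n}(Z,N+2) = 2B(Z,N) - B(Z,N-2) - B(Z,N+2),
```

with the analogous quantities S_{2p} and delta_{2p} obtained by varying Z at fixed N.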
Abstract:
Executive Summary: The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broad scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-and-state-separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation undertaken to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on the ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that the realized returns feature better distributional characteristics than the realized returns from portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate those that result from optimization with respect to a single measure only, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls for a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a distribution of portfolio returns that second-order stochastically dominates those obtained from virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
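A compact sketch of the two dominance checks described above, applied to two hypothetical samples of realized returns; the data and names are placeholders, not the thesis's portfolios.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r_agg = rng.normal(0.008, 0.04, 1000)      # placeholder returns, aggregated strategy
r_single = rng.normal(0.005, 0.05, 1000)   # placeholder returns, single-measure strategy

# 1. Are the two return distributions different at all? (two-sample KS test)
ks_stat, p_value = stats.ks_2samp(r_agg, r_single)
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3g}")

# 2. First-order stochastic dominance: with equal sample sizes, A dominates B
#    if every empirical quantile of A is at least the corresponding quantile of B.
qa, qb = np.sort(r_agg), np.sort(r_single)
fsd = np.all(qa >= qb)

# 3. Second-order stochastic dominance via the absolute (generalized) Lorenz curve:
#    cumulative averages of sorted returns, i.e. expected shortfalls over a range
#    of quantiles. A dominates B if A's curve lies above B's everywhere.
n = len(qa)
lorenz_a = np.cumsum(qa) / n
lorenz_b = np.cumsum(qb) / n
ssd = np.all(lorenz_a >= lorenz_b)

print(f"first-order dominance: {fsd}, second-order dominance: {ssd}")
```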
Abstract:
Toxicokinetic modeling is a useful tool to describe or predict the behavior of a chemical agent in the human or animal organism. A general model based on four compartments was developed in a previous study in order to quantify the effect of human variability on a wide range of biological exposure indicators. The aim of this study was to adapt this existing general toxicokinetic model to three organic solvents, namely methyl ethyl ketone, 1-methoxy-2-propanol and 1,1,1-trichloroethane, and to take sex differences into account. In a previous human volunteer study we assessed the impact of sex on different biomarkers of exposure corresponding to the three organic solvents mentioned above. Results from that study suggested that not only physiological differences between men and women but also differences due to sex hormone levels could influence the toxicokinetics of the solvents. In fact, the use of hormonal contraceptives had an effect on the urinary levels of several biomarkers, suggesting that exogenous sex hormones could influence CYP2E1 enzyme activity. These experimental data were used to calibrate the toxicokinetic models developed in this study. Our results showed that it was possible to use an existing general toxicokinetic model for other compounds. In fact, most of the simulation results showed good agreement with the experimental data obtained for the studied solvents, with the percentage of model predictions lying within the 95% confidence interval varying from 44.4% to 90%. The results pointed out that, for the same exposure conditions, men and women can show important differences in the urinary levels of biological indicators of exposure. Moreover, when running the models under simulated industrial working conditions, these differences could be even more pronounced. In conclusion, a general and simple toxicokinetic model, adapted for three well-known organic solvents, allowed us to show that metabolic parameters can have an important impact on the urinary levels of the corresponding biomarkers. These observations provide evidence of interindividual variability, an aspect that should have its place in approaches for setting occupational exposure limits.
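Since the abstract does not spell out the structure of the general four-compartment model, the following is only a generic sketch of a linear compartmental toxicokinetic model with an 8-hour inhalation exposure; all compartments, rate constants and units are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order exchange and elimination rate constants (1/h).
k_blood_fat, k_fat_blood = 0.8, 0.1     # blood <-> adipose tissue
k_blood_rich, k_rich_blood = 1.2, 0.9   # blood <-> richly perfused tissue
k_met = 0.3                             # metabolic elimination to urinary biomarker

def uptake(t):
    # inhalation exposure during an 8-hour shift (arbitrary amount units per hour)
    return 10.0 if t < 8.0 else 0.0

def rhs(t, x):
    blood, fat, rich, urine = x
    d_blood = (uptake(t) - (k_blood_fat + k_blood_rich + k_met) * blood
               + k_fat_blood * fat + k_rich_blood * rich)
    d_fat = k_blood_fat * blood - k_fat_blood * fat
    d_rich = k_blood_rich * blood - k_rich_blood * rich
    d_urine = k_met * blood             # cumulative urinary biomarker
    return [d_blood, d_fat, d_rich, d_urine]

sol = solve_ivp(rhs, (0.0, 24.0), [0.0, 0.0, 0.0, 0.0], max_step=0.1)
print("urinary biomarker at 24 h:", sol.y[3, -1])
```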
Abstract:
With the advancement of high-throughput sequencing and the dramatic increase of available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to the phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. The mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to current mechanistic codon models are that (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition-transversion rates at the nucleotide level. In this thesis, I pursue two main objectives. The first objective is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the assumptions mentioned above. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one that holds all of them to the most general one that relaxes all of them. The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the best model that is aligned with the underlying characteristics of the data sets. Our experiments show that holding the three mentioned assumptions is not realistic in any of the real data sets, meaning that using simple models that hold these assumptions can be misleading and can result in inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes all three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. Our experiments show that, on a set of randomly chosen data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Furthermore, I show through several experiments that the proposed general model is biologically plausible.
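To illustrate the matrix operation mentioned above, the sketch below assembles a 64x64 codon-level rate matrix from per-position 4x4 HKY matrices via the standard Kronecker-sum construction. It covers only single-nucleotide substitutions and is not the thesis's generalized KCM model, which additionally allows double and triple changes; all parameter values are illustrative.

```python
import numpy as np

def hky(kappa=2.0, pi=(0.25, 0.25, 0.25, 0.25)):
    """HKY85 rate matrix over the nucleotide order (T, C, A, G), rows summing to zero."""
    pi = np.asarray(pi)
    purine = np.array([False, False, True, True])   # A and G are purines
    Q = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            if i == j:
                continue
            transition = purine[i] == purine[j]      # within purines or within pyrimidines
            Q[i, j] = (kappa if transition else 1.0) * pi[j]
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

I4 = np.eye(4)
Q1, Q2, Q3 = hky(), hky(), hky()   # possibly different nucleotide models per codon position

# Kronecker sum: a single substitution at position 1, 2 or 3 of the codon.
Q_codon = (np.kron(np.kron(Q1, I4), I4)
           + np.kron(np.kron(I4, Q2), I4)
           + np.kron(np.kron(I4, I4), Q3))

print(Q_codon.shape)                          # (64, 64)
print(np.allclose(Q_codon.sum(axis=1), 0))    # still a valid rate generator
```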
Abstract:
BACKGROUND: Walk-in centres may improve access to healthcare for some patients, owing to their convenient location and extensive opening hours, with no need for an appointment. Herein, we describe and assess a new model of walk-in centre, characterised by care provided by residents under the supervision of experienced family doctors. The main aim of the study was to assess patients' satisfaction with the care they received from residents and with their supervision by family doctors. The secondary aim was to describe walk-in patients' demographic characteristics and to identify potential associations with satisfaction. METHODS: The study was conducted in the walk-in centre of Lausanne. Patients who consulted between 11th and 31st April were automatically included and received a questionnaire in French. We used a five-point Likert scale, ranging from "not at all satisfied" to "very satisfied", scored from 1 to 5. We focused on satisfaction with residents' care and with supervision by a family doctor. The former was divided into three categories: "Skills", "Treatment" and "Behaviour". A mean satisfaction score was calculated for each category, and a multivariable logistic model was applied in order to identify associations with patients' demographics. RESULTS: The overall response rate was 47% (184/395). Walk-in patients were more likely to be women (62%), young (median age 31) and highly educated (40% with a university degree or equivalent). Patients were "very satisfied" with residents' care, with a median satisfaction score between 4.5 and 5 for each category. Over 90% of patients were "satisfied" or "very satisfied" that a family doctor was involved in the consultation. Age showed the greatest association with satisfaction. CONCLUSION: Patients were highly satisfied with the care provided by residents and with the involvement of a family doctor in the consultation. Older age showed the greatest positive association with satisfaction. The high level of satisfaction reported by walk-in patients supports this new model of walk-in centre.
Abstract:
Electrical stimulation is a new way to treat digestive disorders such as constipation. Colonic propulsive activity can be triggered by battery-operated devices. This study aimed to demonstrate the effect of direct electrical colonic stimulation on mean transit time in a chronic porcine model. The impact of the stimulation and of the implanted material on the colonic wall was also assessed. Three pairs of electrodes were implanted into the caecal wall of 12 anaesthetized pigs. A reference colonic transit time was determined with radiopaque markers for each pig before implantation. The measurement was repeated 4 weeks after implantation with sham stimulation and 5 weeks after implantation with electrical stimulation. Aboral sequential trains of 1-ms pulse width (10 V; 120 Hz) were applied twice daily for 6 days, using an external battery-operated stimulator. For each course of markers, a mean value was computed from the transit times obtained for each individual pig. Microscopic examination of the caecum was routinely performed after animal sacrifice. A reduction of mean transit time was observed after electrical stimulation (19 +/- 13 h; mean +/- SD) when compared with the reference (34 +/- 7 h; P = 0.045) and with the mean transit time after sham stimulation (36 +/- 9 h; P = 0.035). Histological examination revealed minimal chronic inflammation around the electrodes. Colonic transit time measured in a chronic porcine model is thus reduced by direct sequential electrical stimulation, and only minimal tissue lesions are elicited by the stimulation or the implanted material. Electrical colonic stimulation could be a promising approach to treat specific disorders of the large bowel.
Abstract:
We propose a short-range generalization of the p-spin interaction spin-glass model. The model is well suited to test the idea that an entropy collapse underlies the dynamical singularity encountered in structural glasses. The model is studied in three dimensions through Monte Carlo simulations, which reveal fragile-glass behavior with stretched-exponential relaxation and super-Arrhenius growth of the relaxation time. Our data favor a Vogel-Fulcher behavior of the relaxation time, related to an entropy collapse at the Kauzmann temperature. We do, however, encounter difficulties analogous to those found in experimental systems when extrapolating thermodynamic data to low temperatures. We study the spin-glass susceptibility, investigating the behavior of the correlation length in the system. We find that the increase of the relaxation time is accompanied by a very slow growth of the correlation length. We discuss the scaling properties of off-equilibrium dynamics in the glassy regime, finding qualitative agreement with mean-field theory.
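For reference, the Arrhenius and Vogel-Fulcher forms contrasted above can be written, with tau_0, A and T_0 as fitting parameters, as

```latex
\tau_{\mathrm{Arr}}(T) = \tau_0 \exp\!\left(\frac{A}{T}\right),
\qquad
\tau_{\mathrm{VF}}(T) = \tau_0 \exp\!\left(\frac{A}{T - T_0}\right),
```

where the Vogel-Fulcher relaxation time diverges at T_0, which the entropy-collapse scenario identifies with the Kauzmann temperature.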