972 results for Configuration-interaction Method
Abstract:
This study was carried out with the aim of modelling, in 2D plane strain, the movement of a soft cohesive soil around a pile, in order to determine the resulting stresses along the pile per unit length. The problem belongs to the class of large-deformation problems and may be caused by landslides, by proximity to deep excavations, or by nearby zones where large loads are applied to the soil. An elasto-plastic constitutive model with the Mohr-Coulomb failure criterion is used to represent the soil behaviour, and the analysis considers the soil under undrained conditions. The modelling is performed with the finite element program PLAXIS, which uses the Updated Lagrangian Finite Element Method (UL-FEM). Special attention is given to the soil-pile interaction: the formulation of the interface elements is presented in some detail, together with studies aimed at a better understanding of their behaviour. A 2D model is developed that simulates the effect of depth, allowing its influence on the stress distribution around the pile to be studied. The results provide an important basis for understanding how the soil moves around the pile, how the finite element program PLAXIS works, and how the stresses are distributed around the pile. The analysis shows that soil-structure interaction modelled with the UL-FEM and interface elements is better suited to small-deformation problems.
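As a hedged illustration of the failure criterion mentioned above (a minimal sketch, not the PLAXIS implementation; function and parameter names are illustrative), a Mohr-Coulomb yield check in principal stresses could be written as follows. For the undrained total-stress analysis described, phi = 0 and c equals the undrained shear strength.

```python
import math

def mohr_coulomb_yield(sigma1, sigma3, c, phi_deg):
    """Mohr-Coulomb yield function in principal stresses (compression
    positive, sigma1 >= sigma3). f < 0: elastic state; f = 0: at failure.
    For an undrained total-stress analysis, use phi_deg = 0 and c = s_u."""
    phi = math.radians(phi_deg)
    return (0.5 * (sigma1 - sigma3)
            - 0.5 * (sigma1 + sigma3) * math.sin(phi)
            - c * math.cos(phi))
```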
Abstract:
In team sports, the spatial distribution of players on the field is determined by the interaction behaviour established at both the player and team levels. The distribution patterns observed during a game emerge from the specific technical and tactical methods adopted by the teams, and from the individual, environmental and task constraints that influence players' behaviour. By understanding how specific patterns of spatial interaction are formed, one can characterize the behaviour of the respective teams and players. Thus, in the present work we suggest a novel method for describing teams' spatial interaction behaviour, which results from superimposing the Voronoi diagrams of two competing teams. We considered theoretical patterns of spatial distribution in a well-defined scenario (5 vs 4+GK, played on a 20 x 20 m field) in order to generate reference values for the variables derived from the superimposed Voronoi diagrams (SVD). These variables were tested in a formal application to empirical data collected from 19 Futsal trials with identical playing settings. Results suggest that it is possible to identify a number of characteristics that can be used to describe players' spatial behaviour at different levels, namely the defensive methods adopted by the players.
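A hedged sketch of the core idea, under the assumption that superimposing the two teams' Voronoi diagrams amounts to partitioning the pitch among all players and attributing each region to the owning player's team; the grid resolution and function name are illustrative, not the authors' implementation.

```python
import numpy as np

def dominance_areas(team_a, team_b, field=(20.0, 20.0), cells=200):
    """Approximate the area controlled by each team by discretizing the
    field and assigning every grid point to its nearest player (a grid
    analogue of superimposing the two teams' Voronoi diagrams).
    team_a, team_b: sequences of (x, y) player positions."""
    xs = np.linspace(0.0, field[0], cells)
    ys = np.linspace(0.0, field[1], cells)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)          # grid points
    players = np.vstack([np.asarray(team_a), np.asarray(team_b)])
    dist = np.linalg.norm(grid[:, None, :] - players[None, :, :], axis=2)
    nearest = dist.argmin(axis=1)                              # index of closest player
    cell_area = (field[0] * field[1]) / (cells * cells)
    area_a = np.count_nonzero(nearest < len(team_a)) * cell_area
    return area_a, field[0] * field[1] - area_a
```

Comparing the two areas over time is one simple way to derive SVD-type variables describing, for example, how compactly the defending team restricts the attacking team's controlled space.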
Abstract:
We use a two-dimensional (2D) elastic free energy to calculate the effective interaction between two circular disks immersed in smectic-C films. For strong homeotropic anchoring, the distortion of the director field caused by the disks generates topological defects that induce an effective interaction between the disks. We use finite elements, with adaptive meshing, to minimize the 2D elastic free energy. The method is shown to be accurate and efficient for inhomogeneities on the length scales set by the disks and the defects, which differ by up to 3 orders of magnitude. We compute the effective interaction between two disk-defect pairs in a simple (linear) configuration. For large disk separations, D, the elastic free energy scales as ~D^-2, confirming the dipolar character of the long-range effective interaction. For small D the energy exhibits a pronounced minimum. The lowest energy corresponds to a symmetrical configuration of the disk-defect pairs, with the inner defect at the mid-point between the disks. The disks are separated by a distance that is twice the distance of the outer defect from the nearest disk. The latter is identical to the equilibrium distance of a defect nucleated by an isolated disk.
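As a brief hedged sketch of how a dipolar D^-2 scaling can be checked from separation-energy pairs produced by such a minimization (the input arrays are placeholders, not results from the paper): the log-log slope should approach -2 at large D.

```python
import numpy as np

def powerlaw_exponent(D, E):
    """Least-squares slope of log E versus log D; an effective interaction
    of dipolar character should give a slope close to -2 at large D."""
    slope, _ = np.polyfit(np.log(np.asarray(D, float)),
                          np.log(np.asarray(E, float)), 1)
    return slope
```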
Abstract:
Railway vehicle homologation, with respect to running dynamics, is addressed via dedicated norms. The required results, such as accelerations and/or wheel-rail contact forces obtained from experimental tests or simulations, must be available. Multibody dynamics allows railway vehicles to be modelled and represented in real operating conditions, the realism of the multibody models being greatly influenced by the modelling assumptions. In this paper, two alternative multibody models of the Light Rail Vehicle 2000 (LRV) are constructed and simulated in realistic railway track scenarios. The vehicle-track interaction compatibility analysis consists of two stages: the use of the simplified method described in the norm "UIC 518-Testing and Approval of Railway Vehicles from the Point of View of their Dynamic Behaviour-Safety-Track Fatigue-Running Behaviour" for decision making; and visual inspection of the vehicle motion with respect to the track via dedicated tools, for understanding the mechanisms involved.
Abstract:
Consumer-electronics systems are becoming increasingly complex as the number of integrated applications is growing. Some of these applications have real-time requirements, while other non-real-time applications only require good average performance. For cost-efficient design, contemporary platforms feature an increasing number of cores that share resources, such as memories and interconnects. However, resource sharing causes contention that must be resolved by a resource arbiter, such as Time-Division Multiplexing. A key challenge is to configure this arbiter to satisfy the bandwidth and latency requirements of the real-time applications, while maximizing the slack capacity to improve performance of their non-real-time counterparts. As this configuration problem is NP-hard, a sophisticated automated configuration method is required to avoid negatively impacting design time. The main contributions of this article are: 1) An optimal approach that takes an existing integer linear programming (ILP) model addressing the problem and wraps it in a branch-and-price framework to improve scalability. 2) A faster heuristic algorithm that typically provides near-optimal solutions. 3) An experimental evaluation that quantitatively compares the branch-and-price approach to the previously formulated ILP model and the proposed heuristic. 4) A case study of an HD video and graphics processing system that demonstrates the practical applicability of the approach.
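As a hedged, simplified illustration of the kind of configuration problem described (this is not the article's branch-and-price algorithm or its ILP model; the client names, requirements, and the gap-based latency proxy are hypothetical), a greedy TDM slot allocator might look like this:

```python
def allocate_tdm(frame_size, clients):
    """Greedy allocation of a TDM frame. Each client dict has 'name',
    'slots' (bandwidth requirement in slots per frame) and 'max_gap'
    (largest tolerated run of slots not owned by the client, a crude
    proxy for its latency requirement)."""
    frame = [None] * frame_size
    for c in sorted(clients, key=lambda c: c["max_gap"]):  # tightest latency first
        step = frame_size / c["slots"]
        for k in range(c["slots"]):
            # Start from the ideal equidistant position, take the first free slot.
            pos, offset = int(round(k * step)) % frame_size, 0
            while frame[(pos + offset) % frame_size] is not None:
                offset += 1
                if offset == frame_size:
                    raise ValueError("frame too small for the requested slots")
            frame[(pos + offset) % frame_size] = c["name"]
    return frame

def max_gap(frame, name):
    """Largest number of consecutive slots not owned by `name` (with wrap-around)."""
    idx = [i for i, s in enumerate(frame) if s == name]
    if not idx:
        return len(frame)
    n = len(frame)
    return max((idx[(j + 1) % len(idx)] - idx[j] - 1) % n for j in range(len(idx)))

clients = [{"name": "video", "slots": 3, "max_gap": 3},
           {"name": "audio", "slots": 2, "max_gap": 4}]
frame = allocate_tdm(8, clients)
feasible = all(max_gap(frame, c["name"]) <= c["max_gap"] for c in clients)
```

A real arbiter configuration must bound worst-case latency and allocated rate exactly while maximizing the slack left for non-real-time applications, which is what makes the problem NP-hard and motivates both the optimal and the heuristic approaches of the article.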
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
The EM3E Master is an Education Programme supported by the European Commission, the European Membrane Society (EMS), the European Membrane House (EMH), and a large international network of industrial companies, research centres and universities.
Abstract:
Both late menarcheal age and low calcium intake (Ca intake) during growth are risk factors for osteoporosis, probably by impairing peak bone mass. We investigated whether lasting gain in areal bone mineral density (aBMD) in response to increased Ca intake varies according to menarcheal age and, conversely, whether Ca intake could influence menarcheal age. In an initial study, 144 prepubertal girls were randomized in a double-blind controlled trial to receive either a Ca supplement (Ca-suppl.) of 850 mg/d or placebo from age 7.9-8.9 yr. Mean aBMD gain determined by dual energy x-ray absorptiometry at six sites (radius metaphysis, radius diaphysis, femoral neck, trochanter, femoral diaphysis, and L2-L4) was significantly (P = 0.004) greater in the Ca-suppl. than in the placebo group (27 vs. 21 mg/cm²). In 122 girls followed up, menarcheal age was recorded, and aBMD was determined at 16.4 yr of age. Menarcheal age was lower in the Ca-suppl. than in the placebo group (P = 0.048). Menarcheal age and Ca intake were negatively correlated (r = -0.35; P < 0.001), as were aBMD gains from age 7.9-16.4 yr and menarcheal age at all skeletal sites (range: r = -0.41 to r = -0.22; P < 0.001 to P = 0.016). The positive effect of Ca-suppl. on the mean aBMD gain from baseline remained significantly greater in girls below, but not in those above, the median of menarcheal age (13.0 yr). Early menarcheal age (12.1 ± 0.5 yr): placebo, 286 ± 36 mg/cm²; Ca-suppl., 317 ± 46 (P = 0.009); late menarcheal age (13.9 ± 0.5 yr): placebo, 284 ± 58; Ca-suppl., 276 ± 50 (P > 0.05). The level of Ca intake during prepuberty may influence the timing of menarche, which, in turn, could influence long-term bone mass gain in response to Ca supplementation. Thus, both determinants of early menarcheal age and high Ca intake may positively interact on bone mineral mass accrual.
Abstract:
We herein present an improved assay for detecting the presence of Trypanosoma cruzi in infected cultures. Using chagasic human sera (CHS), we were able to detect T. cruzi infection in primary cultures of both peritoneal macrophages and heart muscle cells (HMC). To avoid the elevated background levels hitherto observed in all experiments, especially in those using HMC, CHS were preincubated with uninfected cells in monolayers or suspensions prior to being used for detection of T. cruzi in infected monolayers. Preincubation with cell suspensions gave better results than with monolayers, reducing background by up to three times and increasing sensitivity by up to twenty times. In addition, the continuous fibroblastic cell line L929 was shown to be suitable for preadsorption of CHS. These results indicate that the high background levels observed in previous reports may be due to the presence of human autoantibodies that recognize surface and/or extracellular matrix components in cell monolayers. We therefore propose a modified procedure that increases the performance of the ELISA method, making it a useful tool even in cultures that would otherwise be expected to present low levels of infection or high levels of background.
Abstract:
We present a novel steered molecular dynamics scheme to induce the dissociation of large protein-protein complexes. We apply this scheme to study the interaction of a T cell receptor (TCR) with a major histocompatibility complex (MHC) presenting a peptide (p). Two TCR-pMHC complexes are considered, which only differ by the mutation of a single amino acid on the peptide; one is a strong agonist that produces T cell activation in vivo, while the other is an antagonist. We investigate the interaction mechanism from a large number of unbinding trajectories by analyzing van der Waals and electrostatic interactions and by computing energy changes in proteins and solvent. In addition, dissociation potentials of mean force are calculated with the Jarzynski identity, using an averaging method developed for our steering scheme. We analyze the convergence of the Jarzynski exponential average, which is hampered by the large amount of dissipative work involved and the complexity of the system. The resulting dissociation free energies largely underestimate experimental values, but the simulations are able to clearly differentiate between wild-type and mutated TCR-pMHC and give insights into the dissociation mechanism.
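As a hedged aside on the Jarzynski identity used above, DeltaF = -kT * ln(<exp(-W/kT)>): a minimal, numerically stable estimator over a set of pulling work values (a sketch, not the authors' averaging scheme for their steering protocol) is shown below.

```python
import numpy as np

def jarzynski_free_energy(work, kT):
    """Jarzynski estimator DeltaF = -kT * ln( mean(exp(-W/kT)) ),
    evaluated with a log-sum-exp shift for numerical stability.
    `work` is an array of non-equilibrium work values, one per trajectory."""
    x = -np.asarray(work, dtype=float) / kT
    m = x.max()
    return -kT * (m + np.log(np.mean(np.exp(x - m))))
```

The exponential average is dominated by rare low-work trajectories, which is why, as the abstract notes, convergence degrades when the dissipative work is large.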
Abstract:
A new electrical method is proposed for determining the apparent resistivity of multiple earth layers located underwater. The method is based on direct-current geoelectric sounding principles. A layered earth model is used to simulate the stratigraphic target. The measurement array is of pole-pole type; it is located underwater and is oriented vertically. This particular electrode configuration is very useful when conventional electrical methods cannot be used, especially when the water depth becomes very large. The calculated apparent resistivity shows a substantial increase in the quality of the signal measured from the underwater targets, from which little or no response is obtained using conventional surface electrode methods. In practice, however, different factors such as water stratification, underwater streams or meteorological conditions complicate the interpretation of the field results. A case study is presented, in which field surveys carried out on Lake Geneva were interpreted using the calculated apparent resistivity master curves.
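For context, and as a hedged sketch rather than the paper's own formulation: the apparent resistivity of a pole-pole array is the measured resistance scaled by a geometric factor, which for an array immersed in a full space (electrodes far from the water surface and the bottom) is 4*pi*a instead of the 2*pi*a of a conventional surface half-space array.

```python
import math

def apparent_resistivity_pole_pole(a, delta_v, current, full_space=True):
    """Apparent resistivity [ohm*m] for a pole-pole array with electrode
    spacing a [m], measured voltage delta_v [V] and injected current [A].
    full_space=True applies to an array immersed in water far from
    boundaries; full_space=False gives the surface half-space factor."""
    k = (4.0 if full_space else 2.0) * math.pi * a
    return k * delta_v / current
```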
Abstract:
Consumption of nicotine in the form of smokeless tobacco (snus, snuff, chewing tobacco) or nicotine-containing medication (gum, patch) may benefit sport practice. Indeed, use of snus seems to be a growing trend, and investigating nicotine consumption amongst professional athletes is of major interest to sport authorities. Thus, a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the detection and quantification of nicotine and its principal metabolites cotinine, trans-3-hydroxycotinine, nicotine-N'-oxide and cotinine-N-oxide in urine was developed. Sample preparation was performed by liquid-liquid extraction, followed by hydrophilic interaction chromatography-tandem mass spectrometry (HILIC-MS/MS) operated in electrospray positive ionization (ESI) mode with selective reaction monitoring (SRM) data acquisition. The method was validated and calibration curves were linear over the selected concentration ranges of 10-10,000 ng/mL for nicotine, cotinine and trans-3-hydroxycotinine, and 10-5000 ng/mL for nicotine-N'-oxide and cotinine-N-oxide, with calculated coefficients of determination (R²) greater than 0.95. The total extraction efficiency (%) was concentration dependent and ranged between 70.4 and 100.4%. The lower limit of quantification (LLOQ) for all analytes was 10 ng/mL. Repeatability and intermediate precision were ≤9.4 and ≤9.9%, respectively. In order to measure the prevalence of nicotine exposure during the 2009 Ice Hockey World Championships, 72 samples were collected and analyzed after the minimum 3-month storage period and complete removal of identification means, as required by the 2009 International Standards for Laboratories (ISL). Nicotine and/or metabolites were detected in every urine sample, while concentration measurements indicated an exposure within the last 3 days for eight specimens out of ten. Concentrations of nicotine, cotinine, trans-3-hydroxycotinine, nicotine-N'-oxide and cotinine-N-oxide were found to range between 11 and 19,750, 13 and 10,475, 10 and 8217, 11 and 3396, and 13 and 1640 ng/mL, respectively. When proposing conservative concentration limits for nicotine consumption prior to and/or during the games (50 ng/mL for nicotine, cotinine and trans-3-hydroxycotinine, and 25 ng/mL for nicotine-N'-oxide and cotinine-N-oxide), about half of the hockey players were classified as consumers. These findings strongly support the likelihood of extensive smokeless nicotine consumption. However, since such conclusions can only be hypothesized, the potential use of smokeless tobacco as a doping agent in ice hockey requires further investigation.
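As a hedged illustration of the quantification step common to such validated methods (an unweighted linear calibration; the actual method may well use weighted regression, and all variable names are illustrative):

```python
import numpy as np

def quantify(sample_ratio, cal_conc, cal_ratio):
    """Fit a linear calibration curve (analyte/internal-standard peak-area
    ratio versus concentration) and back-calculate an unknown concentration.
    Returns the estimated concentration and the coefficient of determination,
    which in the validated method should exceed 0.95."""
    slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)
    r2 = np.corrcoef(cal_conc, cal_ratio)[0, 1] ** 2
    return (sample_ratio - intercept) / slope, r2
```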
Abstract:
Chromatographic separation of highly polar basic drugs with volatile mobile phases that are ideal for ionspray mass spectrometry is a difficult challenge. A new quantification procedure was developed using hydrophilic interaction chromatography-mass spectrometry with turbo-ionspray ionization in the positive mode. After addition of deuterated internal standards and a simple clean-up liquid extraction, the dried extracts were reconstituted in 500 µL pure acetonitrile and 5 µL was directly injected onto a Waters Atlantis HILIC 150 x 2.1 mm, 3 µm column. Chromatographic separations of cocaine, seven metabolites, and anhydroecgonine were obtained by linear gradient elution with decreasing high concentrations of acetonitrile (80-56% in 18 min). This high proportion of organic solvent makes coupling with MS easier. The eluent was buffered with 2 mM ammonium acetate at pH 4.5. Except for m-hydroxy-benzoylecgonine, the within-day and between-day precisions at 20, 100, and 500 ng/mL were below 7 and 19.1%, respectively. Accuracy was also within ±13.5% at all tested concentrations. The limit of quantification was 5 ng/mL (%Diff < 16.1, %RSD < 4.3) and the limit of detection below 0.5 ng/mL. This method was successfully applied to a fatal overdose. In Switzerland, cocaine abuse has dramatically increased in the last few years. A 45-year-old man, a known HIV-positive drug user, was found dead at home. According to relatives, cocaine was self-injected about 10 times during the evening before death. A low amount of cocaine (0.45 mg) was detected in the bloody fluid taken from a syringe discovered near the corpse. Besides injection marks, no significant lesions were detected during the forensic autopsy. Toxicological investigations showed high cocaine concentrations in all body fluids and tissues. The peripheral blood concentrations of cocaine, benzoylecgonine, and methylecgonine were 5.0, 10.4, and 4.1 mg/L, respectively. The brain concentrations of cocaine, benzoylecgonine, and methylecgonine were 21.2, 3.8, and 3.3 mg/kg, respectively. The highest concentrations of norcocaine (about 1 mg/L) were measured in bile and urine. Very high levels of cocaine were determined in hair (160 ng/mg), indicating chronic cocaine use. A low concentration of anhydroecgonine methylester was also found in urine (0.65 mg/L), suggesting recent cocaine inhalation. Therapeutic blood concentrations of fluoxetine (0.15 mg/L) and buprenorphine (0.1 µg/L) were also discovered. A relatively high concentration of Δ9-THC was measured both in peripheral blood (8.2 µg/L) and brain cortex (13.5 µg/kg), suggesting that the victim was under the influence of cannabis at the time of death. In addition, fluoxetine might have enhanced the toxic effects of cocaine because of its weak pro-arrhythmogenic properties. Likewise, the combination of cannabinoids and cocaine might have increased detrimental cardiovascular effects. Altogether, these results indicate a lethal cocaine overdose with a minor contribution of fluoxetine and cannabinoids.
Abstract:
Several methods have been suggested to estimate non-linear models with interaction terms in the presence of measurement error. Structural equation models eliminate measurement error bias, but require large samples. Ordinary least squares regression on summated scales, regression on factor scores and partial least squares are appropriate for small samples but do not correct for measurement error bias. Two-stage least squares regression does correct for measurement error bias, but the results strongly depend on the choice of instrumental variables. This article discusses the old disattenuated regression method as an alternative for correcting measurement error in small samples. The method is extended to the case of interaction terms and is illustrated on a model that examines the interaction effect of innovation and style of use of budgets on business performance. Alternative reliability estimates that can be used to disattenuate the estimates are discussed, and a comparison is made with the alternative methods. Methods that do not correct for measurement error bias perform very similarly and considerably worse than disattenuated regression.
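A hedged sketch of the basic disattenuation idea (classical corrections only; the article's extension to interaction terms additionally requires a reliability estimate for the product term, which is not reproduced here):

```python
def disattenuate_correlation(r_xy, rel_x, rel_y):
    """Classical correction for attenuation: the correlation between the
    true scores, given the observed correlation and the two reliabilities."""
    return r_xy / (rel_x * rel_y) ** 0.5

def disattenuate_slope(beta_obs, rel_x):
    """Simple-regression slope corrected for error in the predictor only:
    measurement error in X shrinks the OLS slope by the factor rel_x."""
    return beta_obs / rel_x
```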
Abstract:
In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) is a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. A further complication may stem from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that output a single, fixed number of contributors can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem, and provides a probability distribution over a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, that is, the number of contributors, and the actual value taken by N. Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that relies on categorical assumptions about N.
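As a hedged, heavily simplified sketch of the probabilistic strategy (single locus, a likelihood based only on the number of distinct alleles observed, no allowance for dropout or peak information, uniform prior; nothing here reproduces the paper's Bayesian-network models):

```python
import numpy as np

def posterior_N(observed_distinct, freqs, candidates=(1, 2, 3, 4, 5),
                n_sim=20000, seed=None):
    """Posterior probability over the number of contributors N at one locus,
    given only the count of distinct alleles observed (qualitative data).
    `freqs` are the population allele frequencies (must sum to 1).
    Uniform prior over `candidates`; likelihood estimated by simulation."""
    rng = np.random.default_rng(seed)
    alleles = np.arange(len(freqs))
    likelihood = {}
    for n in candidates:
        draws = rng.choice(alleles, size=(n_sim, 2 * n), p=freqs)
        distinct = np.array([len(set(row)) for row in draws])
        likelihood[n] = np.mean(distinct == observed_distinct)  # P(data | N = n)
    z = sum(likelihood.values())
    return {n: p / z for n, p in likelihood.items()} if z > 0 else likelihood
```

A decision on N could then be taken by minimizing the expected loss under a chosen scoring rule over this posterior, rather than by fixing N categorically in advance.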