Abstract:
This thesis is on the flavor problem of Randall-Sundrum models and their strongly coupled dual theories. These models are particularly well motivated extensions of the Standard Model, because they simultaneously address the gauge hierarchy problem and the hierarchies in the quark masses and mixings. In order to put this into context, special attention is given to the concepts underlying theories which can explain the hierarchy problem and the flavor structure of the Standard Model (SM). The AdS/CFT duality is introduced and its implications for the Randall-Sundrum model with fermions in the bulk and general bulk gauge groups are investigated. It will be shown that the different terms in the general 5D propagator of a bulk gauge field can be related to the corresponding diagrams of the strongly coupled dual, which allows for a deeper understanding of the origin of the flavor changing neutral currents generated by the exchange of the Kaluza-Klein excitations of these bulk fields. In the numerical analysis, different observables which are sensitive to corrections from the tree-level exchange of these resonances will be presented on the basis of updated experimental data from the Tevatron and LHC experiments. This includes electroweak precision observables, namely corrections to the S and T parameters, followed by corrections to the Zbb vertex; flavor changing observables with flavor changes at one vertex, viz. BR(Bd -> mu+ mu-) and BR(Bs -> mu+ mu-), and at two vertices, viz. S_psiphi and |eps_K|; as well as bounds from direct detection experiments. The analysis will show that all of these bounds can be brought into agreement with a new physics scale Lambda_NP in the TeV range, except for the CP-violating quantity |eps_K|, which requires Lambda_NP = O(10) TeV in the absence of fine-tuning. The numerous modifications of the Randall-Sundrum model in the literature which try to attenuate this bound are reviewed and categorized.

Subsequently, a novel solution to this flavor problem, based on an extended color gauge group in the bulk and its thorough implementation in the RS model, will be presented, together with an analysis of the observables mentioned above in the extended model. This solution is especially motivated from the point of view of the strongly coupled dual theory, and the implications for strongly coupled models of new physics which do not possess a holographic dual are examined. Finally, the top quark plays a special role in models with a geometric explanation of flavor hierarchies, and the predictions in the Randall-Sundrum model, with and without the proposed extension, for the forward-backward asymmetry A_FB^t in top pair production are computed.
Abstract:
Protein adsorption occurs immediately following implantation of biomaterials. It is unknown to what extent protein adsorption impacts the cellular events at the bone-implant interface. To investigate this question, we compared the in vitro outcome of osteoblastic cells grown on titanium substrates and on glass as a control, modulating the exposure to serum-derived proteins. Substrates consisted of 1) polished titanium disks; 2) polished disks nanotextured with H2SO4/H2O2; 3) glass. In the pre-adsorption phase, substrates were treated for 1 h with αMEM alone (M-noFBS) or supplemented with 10% foetal bovine serum (M-FBS). MC3T3 osteoblastic cells were cultured on the pre-treated substrates for 3 h and 24 h, in M-noFBS and M-FBS. Subsequently, the culture medium was replaced with M-FBS and the cultures were maintained for 3 and 7 days. Cell number was evaluated by Alamar Blue and MTT assays. Mitotic and osteogenic activities were evaluated by fluorescence optical microscopy after immunolabeling for Ki-67 nuclear protein and osteopontin. Cellular morphology was evaluated by SEM imaging. Data were statistically analyzed using ANOVA (p<0.05). At day 3 and day 7, the presence or absence of serum-derived proteins during the pre-adsorption phase had no significant effect on cell number. Only the absence of FBS during the first 24 h of culture significantly affected cell number (p<0.0001). Titanium surfaces performed better than glass (p<0.01). The growth rate of cells between days 3 and 7 was not affected by the initial absence of FBS. Immunolabeling for Ki-67 and osteopontin showed that mitotic and osteogenic activity were ongoing at 72 h. SEM analysis revealed that the absence of FBS had no major influence on cell shape. • Physico-chemical interactions without mediation by proteins are sufficient to sustain the initial phase of culture and guide osteogenic cells toward differentiation. • The challenge is avoiding the adsorption of 'undesirable' molecules that negatively impact the cueing cells receive from the surface. This may not be a problem in healthy patients, but may play an important role in medically compromised individuals in whom the composition of tissue fluids is altered.
Abstract:
Shell structures are widely used in engineering. The purpose of this dissertation is to show the behavior of a thin shell under external load, especially a long cylindrical shell under compressive load. I analyzed both the linear elastic problem and the buckling problem, and finite element analysis shows that the imperfection of a cylinder affects the critical load, that is, the buckling capability of the cylinder. For the linear elastic problem, I compared the theoretical results with the results obtained from Straus7 and Abaqus, and they are very close. For the buckling problem I made the same comparison between the theoretical and Abaqus results, and the error is less than 1%. In reality, however, the theoretical buckling capability cannot be reached because of the imperfections of the cylinder, so I introduced imperfections of different amplitude in the Abaqus model and found that the buckling capability decreases as the imperfection grows; for example, a 10% imperfection can decrease the buckling capability by about 40%, which matches the buckling behavior observed in reality.
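As a point of reference for the comparison above, the classical critical stress of an axially compressed thin cylinder is σ_cr = Et/(R√(3(1−ν²))). The following minimal Python sketch evaluates it; the material constants, dimensions and knockdown factor are illustrative assumptions, not values from the dissertation.

```python
import math

# Classical critical buckling stress for a thin cylinder under axial
# compression: sigma_cr = E*t / (R * sqrt(3*(1 - nu^2))).
E = 210e9    # Young's modulus [Pa] (steel, illustrative)
nu = 0.3     # Poisson's ratio
t = 0.005    # wall thickness [m] (illustrative)
R = 1.0      # cylinder radius [m] (illustrative)

sigma_cr = E * t / (R * math.sqrt(3.0 * (1.0 - nu**2)))
P_cr = sigma_cr * 2.0 * math.pi * R * t  # critical axial load [N]

# Knockdown: real shells buckle below the classical load because of
# geometric imperfections (cf. the ~40% drop reported above for a 10%
# imperfection amplitude).
knockdown = 0.6  # illustrative factor, not a measured value
print(f"classical sigma_cr = {sigma_cr/1e6:.1f} MPa")
print(f"classical P_cr     = {P_cr/1e6:.2f} MN")
print(f"with knockdown     = {knockdown * P_cr/1e6:.2f} MN")
```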
Abstract:
In the past two decades, the work of a growing portion of researchers in robotics has focused on a particular group of machines belonging to the family of parallel manipulators: cable robots. Although these robots share several theoretical elements with the better-known parallel robots, they still present completely (or partly) unsolved issues. In particular, the study of their kinematics, already a difficult subject for conventional parallel manipulators, is further complicated by the non-linear nature of cables, which can exert only tensile forces. The work presented in this thesis therefore focuses on the study of the kinematics of these robots and on the development of numerical techniques able to address some of the related problems. Most of the work is devoted to the development of an interval-analysis based procedure for the solution of the direct geometric problem of a generic cable manipulator. Besides allowing for a rapid solution of the problem, this technique also guarantees the results obtained against rounding and elimination errors, and can take into account uncertainties in the model of the problem. The developed code has been tested with the help of a small manipulator whose realization is described in this dissertation, together with the auxiliary work done during its design and simulation phases.
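To make the role of interval analysis concrete, here is a minimal Python sketch of the kind of certified test such a procedure can build on: evaluating one cable-length constraint over a box of candidate platform positions, so that boxes on which the constraint interval excludes zero can be safely discarded. The anchor position, cable length and box are hypothetical, and the real solver for the direct geometric problem is far more elaborate.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o):  return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):  return Interval(self.lo - o.hi, self.hi - o.lo)
    def sq(self):
        # Tight square of an interval (handles intervals containing 0).
        cands = (self.lo**2, self.hi**2)
        lo = 0.0 if self.lo <= 0.0 <= self.hi else min(cands)
        return Interval(lo, max(cands))

def cable_constraint_box(box, anchor, length):
    """Interval evaluation of ||p - a||^2 - L^2 over a position box.
    If the resulting interval excludes 0, the box cannot contain a
    solution of the direct geometric problem and can be discarded."""
    s = Interval(0.0, 0.0)
    for x, a in zip(box, anchor):
        s = s + (x - Interval(a, a)).sq()
    return s - Interval(length**2, length**2)

# Hypothetical data: a 1 m cable from anchor (0, 0, 3) to a platform
# point boxed in [0.4,0.6] x [0.4,0.6] x [2.0,2.2].
box = [Interval(0.4, 0.6), Interval(0.4, 0.6), Interval(2.0, 2.2)]
r = cable_constraint_box(box, (0.0, 0.0, 3.0), 1.0)
print(f"constraint range: [{r.lo:.3f}, {r.hi:.3f}]",
      "-> box discarded" if r.lo > 0 or r.hi < 0 else "-> box kept")
```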
Abstract:
This work focuses on the study of saltwater intrusion in coastal aquifers, and in particular on the construction of conceptual schemes to evaluate the associated risk. Saltwater intrusion depends on different natural and anthropic factors, both strongly aleatory, that should be considered for an optimal management of the territory and of water resources. Given the uncertainty in the problem parameters, the risk associated with salinization needs to be cast in a probabilistic framework. On the basis of a widely adopted sharp-interface formulation, key hydrogeological parameters are modeled as random variables, and global sensitivity analysis is used to determine their influence on the position of the saltwater interface. The analyses presented in this work rely on an efficient model reduction technique based on Polynomial Chaos Expansion, which provides an accurate description of the model without a large computational burden. When the assumptions of classical analytical models are not respected, as happens in several real case studies, including the area analyzed in the present work, one can adopt data-driven techniques based on the analysis of the data characterizing the system under study. A model can then be defined on the basis of the connections between the system state variables, with only a limited number of assumptions about the "physical" behaviour of the system.
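As an illustration of the machinery involved, the following Python sketch fits a regression-based Polynomial Chaos Expansion to a toy two-parameter model and reads first-order Sobol indices off the coefficients. The model, the parameter names (k, q) and the sample size are invented for illustration and do not reproduce the sharp-interface formulation used in the work.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(k, q):
    # Toy stand-in for the interface-position model; the analytical
    # sharp-interface model from the thesis is not reproduced here.
    return np.exp(0.3 * k) + 0.5 * q + 0.2 * k * q

# Standardized random inputs, e.g. log-conductivity k and net flux q.
N = 2000
k, q = rng.standard_normal(N), rng.standard_normal(N)
y = model(k, q)

# Hermite PCE basis up to total degree 2 (probabilists' polynomials).
He = lambda x, d: {0: np.ones_like(x), 1: x, 2: x**2 - 1}[d]
terms = [(0, 0), (1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]
A = np.column_stack([He(k, i) * He(q, j) for i, j in terms])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Norms of the basis: E[He_i^2] = i!, so ||Psi_(i,j)||^2 = i! * j!.
fact = [1, 1, 2]
var_terms = np.array([c**2 * fact[i] * fact[j]
                      for c, (i, j) in zip(coef, terms)])
total = var_terms[1:].sum()             # exclude the mean term
S_k = var_terms[[1, 3]].sum() / total   # terms involving only k
S_q = var_terms[[2, 4]].sum() / total   # terms involving only q
print(f"first-order Sobol indices: S_k={S_k:.2f}, S_q={S_q:.2f}")
```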
Abstract:
Computing the weighted geometric mean of large sparse matrices is an operation that tends to become rapidly intractable as the size of the matrices involved grows. However, if we are not interested in computing the matrix function itself, but only its product with a vector, the problem becomes simpler and there is a chance to solve it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of some operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. Hence, we focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and also try to assess how convergence speed and execution time are influenced by some characteristics of the input matrices. Our results suggest that a few elements have some bearing on the performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
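To illustrate the Krylov family of approaches for f(A)v, here is a minimal Python sketch of the Lanczos approximation for a symmetric positive definite matrix, applied to the real power α = 1/2, the simplest power of the kind through which the weighted geometric mean can be expressed. The matrix is a random toy example, and the generalization to the pencil case f(A\B)v is not shown.

```python
import numpy as np

def lanczos_matfunc(A, v, f, m=30):
    """Approximate f(A) @ v for symmetric positive definite A via the
    Lanczos process: f(A) v ~ ||v|| * V_m f(T_m) e_1."""
    n = v.size
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:          # lucky breakdown: shrink basis
                V, alpha, beta = V[:, :j+1], alpha[:j+1], beta[:j]
                break
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    eigval, eigvec = np.linalg.eigh(T)
    fT_e1 = eigvec @ (f(eigval) * eigvec[0, :])   # f(T) e_1
    return np.linalg.norm(v) * (V @ fT_e1)

# Toy SPD matrix; alpha = 1/2 gives the matrix square root.
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)
v = rng.standard_normal(200)
approx = lanczos_matfunc(A, v, lambda lam: lam**0.5, m=40)
# Reference via full eigendecomposition (affordable at this toy size).
w, U = np.linalg.eigh(A)
exact = U @ (np.sqrt(w) * (U.T @ v))
print("relative error:",
      np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```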
Abstract:
Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Due to discrepancies between model and reality, the potential is not unique, so that the stability of such a method and its convergence to a meaningful solution are genuine issues.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause the non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we elaborate a meaningful approximation of the structure and use it to derive a modification of the Levenberg-Marquardt method. We employ the latter for the reconstruction of the interaction parameters from experimental data for liquid argon and nitrogen. We show that the modified method is stable, convergent and fast. Further, the singular value analysis of the structure and of its approximation allows one to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results thus build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
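As a sketch of the regularizing Levenberg-Marquardt idea, the following Python code runs a damped Gauss-Newton iteration on a toy least-squares problem with a Lennard-Jones-like forward map. It is only a schematic stand-in: the actual forward map from interaction parameters to fluid structure requires a molecular simulation, and the modification derived in this work is not reproduced.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, iters=50):
    """Damped Gauss-Newton/LM iteration for min ||residual(x)||^2.
    The damping term lam*I both stabilizes the step and regularizes
    directions with small singular values (cf. the weak parameters
    identified by the singular value analysis above)."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(r):
            x, lam = x + step, lam / 2    # accept: relax the damping
        else:
            lam *= 4                      # reject: damp more strongly
    return x

# Toy target: a Lennard-Jones-like curve with parameters (eps, sigma).
r_grid = np.linspace(0.9, 2.5, 60)
def forward(p):
    eps, sigma = p
    return 4 * eps * ((sigma / r_grid)**12 - (sigma / r_grid)**6)

data = forward(np.array([1.0, 1.1]))      # synthetic "experimental" data
res = lambda p: forward(p) - data
jac = lambda p: np.column_stack([         # central finite differences
    (forward(p + h) - forward(p - h)) / 2e-6
    for h in 1e-6 * np.eye(2)])
print(levenberg_marquardt(res, jac, x0=[0.5, 1.0]))  # ~ [1.0, 1.1]
```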
Abstract:
The problem of localizing a scatterer, which represents a tumor, in a homogeneous circular domain, which represents a breast, is addressed. A breast imaging method based on microwaves is considered. Microwave imaging comprises several techniques for detecting, localizing and characterizing tumors in breast tissues; all of them involve an electromagnetic inverse scattering problem. For scatterer detection, an algorithm based on a linear solution procedure, inspired by the MUltiple SIgnal Classification algorithm (MUSIC) and the Time Reversal method (TR), is implemented. The algorithm returns a reconstructed image of the investigation domain, called a pseudospectrum, in which the scatterer position is detected. A preliminary performance analysis of the algorithm under a varying working frequency is carried out: the resolution and the signal-to-noise ratio of the pseudospectra improve if a multi-frequency approach is adopted. The Geometrical Mean-MUSIC algorithm (GM-MUSIC) is proposed as such a multi-frequency method, and its performance is tested in different realistic computer simulations. The analysis shows that the algorithm detects the scatterer as long as the electrical parameters of the breast are known. This is an evident limitation, since in a real-life situation the anatomy of the breast is unknown. An improvement of GM-MUSIC is therefore proposed: the Eye-GMMUSIC algorithm, which needs no a priori information on the electrical parameters of the breast. It is an optimization algorithm based on pattern search: it looks for the breast parameters that minimize the Signal-to-Clutter Mean Ratio (SCMR) in the signal. Finally, the GM-MUSIC and Eye-GMMUSIC algorithms are tested on a microwave breast cancer detection system consisting of a dipole antenna, a Vector Network Analyzer and a novel breast phantom built at the University of Bologna. The reconstruction of the experimental data confirms the ability of GM-MUSIC to localize a scatterer in a homogeneous medium.
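To make the MUSIC-type detection step concrete, here is a minimal Python sketch: it builds the multistatic response matrix of a single point scatterer in a homogeneous medium using the 2-D free-space Green's function, extracts the noise subspace by SVD, and evaluates the pseudospectrum on a grid. All setup values (antenna ring, frequency, assumed permittivity, scatterer position) are illustrative, and the GM-MUSIC multi-frequency combination is not reproduced.

```python
import numpy as np
from scipy.special import hankel1

# Hypothetical setup: 16 antennas on a circle around a homogeneous
# breast-like medium, one point scatterer; values are illustrative.
c0, f = 3e8, 2e9
k = 2 * np.pi * f * np.sqrt(9.0) / c0        # wavenumber, eps_r = 9 assumed
ang = 2 * np.pi * np.arange(16) / 16
ant = 0.1 * np.column_stack([np.cos(ang), np.sin(ang)])  # antenna ring [m]
scat = np.array([0.02, 0.03])                # true scatterer position

def green(p, q):
    # 2-D free-space Green's function between points p and q.
    return 0.25j * hankel1(0, k * np.linalg.norm(p - q))

# Multistatic response matrix for a single point scatterer (Born-type),
# plus a little synthetic measurement noise.
g = np.array([green(a, scat) for a in ant])
noise_mat = np.random.default_rng(2).standard_normal((16, 16))
K = np.outer(g, g) + 1e-4 * (noise_mat * (1 + 1j))

# MUSIC: project steering vectors onto the noise subspace of K.
U, s, _ = np.linalg.svd(K)
noise = U[:, 1:]                     # one scatterer -> one signal vector
xs = np.linspace(-0.08, 0.08, 81)
P = np.zeros((81, 81))               # the pseudospectrum
for i, x in enumerate(xs):
    for j, y in enumerate(xs):
        gv = np.array([green(a, np.array([x, y])) for a in ant])
        P[j, i] = 1.0 / np.linalg.norm(noise.conj().T @ gv)**2
peak = np.unravel_index(P.argmax(), P.shape)
print("estimated scatterer at", xs[peak[1]], xs[peak[0]])
```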
Abstract:
This thesis work aims to find a procedure for isolating specific features of the current signal from a plasma focus for medical applications. The structure of the current signal inside a plasma focus is peculiar to this class of machines, so a specific analysis procedure has to be developed. The hope is to find one or more features that show a correlation with the delivered dose. The study of the correlation between the discharge current signal and the dose delivered by a plasma focus could be important not only for the practical application of dose prediction but also for expanding the knowledge about plasma focus physics. Various classes of time-frequency analysis techniques are implemented in order to solve the problem.
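As an example of one such representation, the following Python sketch computes a short-time spectrogram of a synthetic discharge-like current trace and extracts a simple candidate feature. The signal model, sampling rate and pinch time are invented for illustration; the features actually studied in the thesis are not reproduced.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic stand-in for a plasma-focus discharge current: a damped
# oscillation with a sharp dip at the pinch time.
fs = 50e6                                  # sampling rate [Hz], illustrative
t = np.arange(0, 20e-6, 1 / fs)
current = (np.exp(-t / 8e-6) * np.sin(2 * np.pi * 250e3 * t)
           - 0.4 * np.exp(-((t - 6e-6) / 0.2e-6)**2))   # pinch dip at 6 us

# Short-time spectrogram: one of the time-frequency representations
# from which candidate features (e.g. energy around the pinch) can be read.
f, tt, Sxx = spectrogram(current, fs=fs, nperseg=256, noverlap=192)

# Example feature: total spectral energy in a window around the dip.
mask = (tt > 5e-6) & (tt < 7e-6)
feature = Sxx[:, mask].sum()
print(f"energy around the pinch window: {feature:.3e}")
```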
Abstract:
Percutaneous nephrolithotomy (PCNL) for the treatment of renal stones and other related renal diseases has proved its efficacy and has stood the test of time compared with open surgical methods and extracorporeal shock wave lithotripsy. However, access to the collecting system of the kidney is not easy, because the available intra-operative imaging modalities only provide a two-dimensional view of the surgical scenario. With this lack of visual information, several punctures are often necessary, which increases the risk of renal bleeding, splanchnic, vascular or pulmonary injury, or damage to the collecting system that sometimes makes the continuation of the procedure impossible. In order to address this problem, this paper proposes a workflow for the introduction of a stereotactic needle guidance system for PCNL procedures. An analysis of the imposed clinical requirements and an instrument guidance approach that provides the physician with more intuitive planning and visual guidance for accessing the collecting system of the kidney are presented.
Abstract:
This paper describes informatics for cross-sample analysis with comprehensive two-dimensional gas chromatography (GCxGC) and high-resolution mass spectrometry (HRMS). GCxGC-HRMS analysis produces large data sets that are rich with information but highly complex. The size of the data and the volume of information require automated processing for comprehensive cross-sample analysis, but the complexity poses a challenge for developing robust methods. The approach developed here analyzes GCxGC-HRMS data from multiple samples to extract a feature template that comprehensively captures the pattern of peaks detected in the retention-time plane. Then, for each sample chromatogram, the template is geometrically transformed to align with the detected peak pattern and to generate a set of feature measurements for cross-sample analyses such as sample classification and biomarker discovery. The approach avoids the intractable problem of comprehensive peak matching by using a few reliable peaks for alignment and peak-based retention-plane windows to define comprehensive features that can be reliably matched across samples. The informatics are demonstrated with a set of 18 samples from breast-cancer tumors, each from a different individual, six each for Grades 1-3. The features allow a classification that matches grading by a cancer pathologist with 78% success in leave-one-out cross-validation experiments. The HRMS signatures of the features of interest can then be examined to determine elemental compositions and identify compounds.
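To illustrate the cross-validation protocol, here is a minimal Python sketch of leave-one-out classification on a synthetic 18 x 40 feature matrix with six samples per grade. The nearest-centroid classifier and the random features are stand-ins, not the classifier or the template features used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical feature matrix: 18 samples x 40 template features, with
# labels for tumor grades 1-3 (six samples each), standing in for the
# aligned GCxGC-HRMS peak measurements described above.
X = rng.standard_normal((18, 40))
y = np.repeat([1, 2, 3], 6)
X += 0.8 * y[:, None] * rng.standard_normal((1, 40))  # grade-linked shift

# Leave-one-out cross-validation with a nearest-centroid classifier.
correct = 0
for i in range(len(y)):
    train = np.arange(len(y)) != i
    cents = {g: X[train & (y == g)].mean(axis=0) for g in (1, 2, 3)}
    pred = min(cents, key=lambda g: np.linalg.norm(X[i] - cents[g]))
    correct += (pred == y[i])
print(f"LOO accuracy: {correct}/{len(y)} = {correct/len(y):.0%}")
```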
Abstract:
Objective: To compare clinical outcomes after laparoscopic cholecystectomy (LC) for acute cholecystitis performed at various time-points after hospital admission. Background: Symptomatic gallstones represent an important public health problem, with LC the treatment of choice. LC is increasingly offered for acute cholecystitis; however, the optimal time-point for LC in this setting remains a matter of debate. Methods: The analysis was based on the prospective database of the Swiss Association of Laparoscopic and Thoracoscopic Surgery and included patients undergoing emergency LC for acute cholecystitis between 1995 and 2006, grouped according to the time-point of LC after hospital admission (admission day (d0), d1, d2, d3, d4/5, d ≥6). Linear and generalized linear regression models assessed the effect of the timing of LC on intra- and postoperative complications, conversion and reoperation rates, and length of postoperative hospital stay. Results: Of 4113 patients, 52.8% were female; median age was 59.8 years. Delaying LC resulted in significantly higher conversion rates (from 11.9% at d0 to 27.9% at d ≥6 after admission, P < 0.001), surgical postoperative complications (5.7% to 13%, P < 0.001) and re-operation rates (0.9% to 3%, P = 0.007), with a significantly longer postoperative hospital stay (P < 0.001). Conclusions: Delaying LC for acute cholecystitis has no advantages, resulting in significantly increased conversion/re-operation rates, postoperative complications and a longer postoperative hospital stay. This investigation, one of the largest in the literature, provides compelling evidence that acute cholecystitis merits surgery within 48 hours of hospital admission if the impact on the patient and the health care system is to be minimized.
Abstract:
Injury from interpersonal violence is a major social and medical problem in the industrialized world. Little is known about the trends in prevalence and injury pattern or about the demographic characteristics of the patients involved.
Abstract:
BACKGROUND: Despite recent algorithmic and conceptual progress, the stoichiometric network analysis of large metabolic models remains a computationally challenging problem. RESULTS: SNA is an interactive, high-performance toolbox for analysing the possible steady-state behaviour of metabolic networks by computing the generating and elementary vectors of their flux and conversion cones. It also supports analysing the steady states by linear programming. The toolbox is implemented mainly in Mathematica and returns numerically exact results. It is available under an open source license from: http://bioinformatics.org/project/?group_id=546. CONCLUSION: Thanks to its performance and modular design, SNA is demonstrably useful in analysing genome-scale metabolic networks. Further, the integration into Mathematica provides a very flexible environment for the subsequent analysis and interpretation of the results.
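As an example of the linear-programming view of steady states that the toolbox supports, the following Python sketch (using scipy rather than Mathematica) maximizes a flux subject to the steady-state condition S v = 0 on a toy four-reaction network; the network and bounds are invented for illustration and are not from SNA's documentation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric network: metabolites A and B, four reactions:
#   v1: -> A,   v2: A -> B,   v3: B ->,   v4: A ->
S = np.array([[ 1, -1,  0, -1],    # balance of A
              [ 0,  1, -1,  0]])   # balance of B

# Steady state S v = 0, fluxes bounded, maximize flux through A -> B.
c = np.array([0, -1, 0, 0])        # linprog minimizes, so negate v2
res = linprog(c, A_eq=S, b_eq=np.zeros(2),
              bounds=[(0, 10)] * 4, method="highs")
print("optimal steady-state fluxes:", res.x)
```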
Abstract:
Vaccines with a limited ability to prevent HIV infection may positively impact the HIV/AIDS pandemic by preventing secondary transmission and disease in vaccine recipients who become infected. To evaluate the impact of vaccination on secondary transmission and disease, efficacy trials assess vaccine effects on HIV viral load and other surrogate endpoints measured after infection. A standard test that compares the distribution of viral load between the infected subgroups of vaccine and placebo recipients does not assess a causal effect of the vaccine, because the comparison groups are selected after randomization. To address this problem, we formulate clinically relevant causal estimands using the principal stratification framework developed by Frangakis and Rubin (2002), and propose a class of logistic selection bias models whose members identify the estimands. Given a selection model in the class, procedures are developed for testing and estimating the causal effect of vaccination on viral load in the principal stratum of subjects who would be infected regardless of randomization assignment. We show how the procedures can be used for a sensitivity analysis that quantifies how the causal effect of vaccination varies with the presumed magnitude of selection bias.
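The following Python sketch conveys the flavor of such a sensitivity analysis on synthetic data: infected placebo recipients are weighted by a logistic-type selection term w(y) proportional to exp(βy), calibrated so the weights average to the identified always-infected fraction, and the stratum-level contrast is traced over a grid of β. It is a schematic illustration only, not the estimators or testing procedures developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical trial data (synthetic, for illustration only): infection
# counts per arm and log10 viral loads of the infected subjects.
n_v, n_p = 1000, 1000
inf_v, inf_p = 60, 100                 # infections per arm
y_vax = rng.normal(4.0, 0.7, inf_v)    # viral loads, infected vaccinees
y_pla = rng.normal(4.5, 0.7, inf_p)    # viral loads, infected placebos

# Fraction of infected placebos presumed "always infected" (i.e. would
# also have been infected under vaccine), identified by the design:
p = (inf_v / n_v) / (inf_p / n_p)

def stratum_effect(beta):
    """Schematic selection-bias sensitivity analysis: weight infected
    placebo subjects by w(y) ~ exp(beta*y) (beta = 0: random selection
    into the always-infected stratum; beta > 0: higher loads more likely
    in it), then contrast means in the always-infected stratum."""
    w = np.exp(beta * y_pla)
    w *= p / w.mean()                  # calibrate selection probabilities
    w = np.clip(w, 0.0, 1.0)           # keep them valid probabilities
    return y_vax.mean() - np.average(y_pla, weights=w)

for beta in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"beta={beta:+.1f}: estimated effect = {stratum_effect(beta):+.3f}")
```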