926 results for Elementary Methods In Number Theory
Abstract:
In the last couple of decades we have witnessed a reappraisal of spatial design-based techniques. Usually, the information on the spatial location of the individuals of a population has been used to develop efficient sampling designs. This thesis offers a new technique for inference on both individual values and global population values that employs, at the estimation stage, the spatial information available before sampling, by rewriting a deterministic interpolator within a design-based framework. The resulting point estimator of the individual values is treated both for finite spatial populations and for continuous spatial domains, while the theory for the estimator of the global population value covers the finite-population case only. A fairly broad simulation study compares the point estimator with the simple-random-sampling-without-replacement estimator in predictive form and with kriging, the benchmark technique for inference on spatial data. The Monte Carlo experiment is carried out on populations generated according to different superpopulation models in order to control different aspects of the spatial structure. The simulation outcomes show that the proposed point estimator behaves almost like the kriging predictor regardless of the parameters adopted for generating the populations, especially for low sampling fractions. Moreover, the use of spatial information substantially improves design-based spatial inference on individual values.
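The abstract does not specify which deterministic interpolator is rewritten; as an illustration only, a minimal point prediction with inverse-distance weighting (a common deterministic interpolator, chosen here as an assumption) can be sketched as:

```python
import numpy as np

def idw_point_estimate(sample_coords, sample_values, target, power=2.0):
    """Inverse-distance-weighted prediction of the value at an unsampled
    location: a simple deterministic interpolator of the kind the thesis
    recasts in a design-based framework (the weight form is an assumption)."""
    d = np.linalg.norm(sample_coords - target, axis=1)
    if np.any(d == 0):                      # target coincides with a sampled unit
        return sample_values[np.argmin(d)]
    w = d ** -power                         # closer units get larger weights
    return float(np.sum(w * sample_values) / np.sum(w))

rng = np.random.default_rng(0)
coords = rng.uniform(0, 1, size=(10, 2))    # sampled locations
values = coords.sum(axis=1)                 # a smooth synthetic spatial surface
est = idw_point_estimate(coords, values, np.array([0.5, 0.5]))
```

In the thesis the weights would instead be derived under the sampling design; the IDW form above is only a placeholder for a generic deterministic interpolator.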
Abstract:
X-ray absorption spectroscopy (XAS) is a powerful means of investigating structural and electronic properties in condensed-matter physics. Analysis of the near-edge part of the XAS spectrum, the so-called X-ray Absorption Near Edge Structure (XANES), can typically provide the following information on the photoexcited atom: oxidation state and coordination environment; speciation of transition-metal compounds; conduction-band DOS projected on the excited atomic species (PDOS). Analysis of XANES spectra is greatly aided by simulations; in the most common scheme, the multiple-scattering framework is used with the muffin-tin approximation for the scattering potential, and the spectral simulation is based on a hypothetical reference structure. This approach has the advantage of requiring relatively little computing power, but in many cases the assumed structure is quite different from the actual system measured, and the muffin-tin approximation is not adequate for low-symmetry structures or highly directional bonds. It is therefore well justified to develop alternative methods. In one approach, the spectral simulation is based on atomic coordinates obtained from a structure optimized with DFT (Density Functional Theory). In another approach, which is the object of this thesis, the XANES spectrum is calculated directly from an ab initio DFT calculation of the atomic and electronic structure. This method takes full advantage of the true many-electron final wavefunction, which can be computed with DFT algorithms that include a core hole in the absorbing atom, to compute the final cross section. To calculate the many-electron final wavefunction, the Projector Augmented Wave (PAW) method is used.
In this scheme, the absorption cross section is written as a function of several contributions, among them the many-electron wavefunction of the final state; this is calculated starting from the pseudo-wavefunction and reconstructing the true wavefunction by means of a transformation operator that involves certain quantities, called partial waves and projector functions. The aim of my thesis is to apply and test the PAW methodology for the calculation of the XANES cross section. I have focused on iron and silicon structures and on some target biological molecules (myoglobin and cytochrome c). Finally, other inorganic and biological systems could be considered in future applications of this methodology, which could become an important improvement with respect to the multiple-scattering approach.
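The PAW reconstruction described above can be sketched as follows; the partial waves, projectors, and flat discretization are toy assumptions, not the actual radial-grid basis sets a real PAW code uses:

```python
import numpy as np

def paw_reconstruct(psi_tilde, phi_ae, phi_ps, projectors):
    """All-electron wavefunction from the smooth pseudo-wavefunction via the
    PAW transformation |psi> = |psi~> + sum_i (|phi_i> - |phi~_i>) <p~_i|psi~>.
    All quantities are toy discretized vectors for illustration."""
    psi = psi_tilde.copy()
    for phi, phi_t, p in zip(phi_ae, phi_ps, projectors):
        coeff = np.vdot(p, psi_tilde)        # projector overlap <p~_i|psi~>
        psi += (phi - phi_t) * coeff         # add the (AE - PS) correction
    return psi

n = 8
psi_t = np.ones(n) / np.sqrt(n)              # toy smooth pseudo-wavefunction
phi_ae = [np.eye(n)[0]]                      # one toy all-electron partial wave
phi_ps = [np.zeros(n)]                       # its smooth counterpart
proj = [np.eye(n)[0]]                        # its projector function
psi = paw_reconstruct(psi_t, phi_ae, phi_ps, proj)
```

When the all-electron and pseudo partial waves coincide the correction vanishes and the pseudo-wavefunction is returned unchanged, which is a quick sanity check on the transformation.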
Abstract:
The study analyzes how art history and visual culture are used within the medical humanities, and seeks to suggest a more useful method than those proposed so far. The text is organized in two parts. The first part analyzes some theories and practices of the humanities in medicine. In particular, we focus on narrative medicine and on the approaches through which art history is included in the majority of medical humanities programs. We then propose to reconsider these methods and to strengthen the role of historical and visual thinking within such teaching. In the second part, in light of what emerged in the first, we turn to a case study: the representation of love melancholy, or lovesickness, in a series of Dutch paintings of the Golden Age. We connect these works to medical-philosophical treatises of the period that make it possible to place lovesickness in a historical context; we then analyze some interpretations offered by scholars and art historians of our own time. In particular, we examine the pioneering study by Henry Meige, published in the "Nouvelle iconographie de la Salpêtrière" in 1899, which opens the possibility of a critical comparison both with the iconodiagnostic positions of Charcot and Richer and with those of early psychoanalysis.
Abstract:
The conventional way to calculate hard-scattering processes in perturbation theory using Feynman diagrams is not efficient enough to calculate all necessary processes, for example for the Large Hadron Collider, to sufficient precision. Two alternatives to order-by-order calculations are studied in this thesis.

In the first part we compare the numerical implementations of four different recursive methods for the efficient computation of Born gluon amplitudes: Berends-Giele recurrence relations and recursive calculations with scalar diagrams, with maximal-helicity-violating vertices, and with shifted momenta. Of the four methods considered, the Berends-Giele method performs best if the number of external partons is eight or larger; for fewer than eight external partons, the recursion relation with shifted momenta offers the best performance. When investigating numerical stability and accuracy, we found that all methods give satisfactory results.

In the second part of this thesis we present an implementation of a parton-shower algorithm based on the dipole formalism. The formalism treats initial- and final-state partons on the same footing. The shower algorithm can be used for hadron colliders and electron-positron colliders, and massive partons in the final state are included. Finally, we studied numerical results for an electron-positron collider, the Tevatron, and the Large Hadron Collider.
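To give a flavour of the recursive methods compared in the first part, here is a toy Berends-Giele-style off-shell current recursion for a scalar phi^3 theory; the 1-D "momenta", the coupling, and the propagator form are illustrative assumptions (the thesis works with full gluon amplitudes, not this scalar analogue):

```python
from functools import lru_cache

momenta = (1.0, 2.0, 3.0, 4.0)   # toy 1-D "momenta" of the external legs
g = 1.0                          # toy coupling constant

@lru_cache(maxsize=None)
def current(i, j):
    """Off-shell current of the contiguous legs i..j-1, built recursively
    from the currents of all ways to split the range in two (phi^3 vertex)."""
    if j - i == 1:
        return 1.0                                # single external leg
    prop = 1.0 / sum(momenta[i:j]) ** 2           # toy off-shell propagator
    total = 0.0
    for k in range(i + 1, j):                     # sum over all splittings
        total += g * current(i, k) * current(k, j)
    return prop * total

amp = current(0, len(momenta))
```

The memoization via `lru_cache` is what makes the recursion polynomial rather than exponential in the number of legs, which is the key efficiency property of the Berends-Giele approach.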
Abstract:
This thesis provides a thorough theoretical background in network theory and shows novel applications to real problems and data. In the first chapter a general introduction to network ensembles is given, and the relations with “standard” equilibrium statistical mechanics are described. Moreover, an entropy measure is used to analyze the statistical properties of integrated PPI-signalling-mRNA expression networks in different cases. In the second chapter multilayer networks are introduced to evaluate and quantify the correlations between real interdependent networks. Multiplex networks describing citation-collaboration interactions and patterns in colorectal cancer are presented. The last chapter is dedicated to control theory and its relation to network theory. We characterise how the structural controllability of a network is affected by the fraction of low in-degree and low out-degree nodes. Finally, we present a novel approach to the controllability of multiplex networks.
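A minimal sketch of the standard structural-controllability criterion referred to above: the number of driver nodes of a directed network equals N minus the size of a maximum matching, computed here with a simple augmenting-path bipartite matching (the graphs are toy assumptions, not the thesis's data):

```python
def driver_nodes(n, edges):
    """Minimum number of driver nodes of a directed graph on nodes 0..n-1,
    via the maximum-matching criterion: each edge u->v links u's out-copy
    to v's in-copy in a bipartite graph; unmatched in-copies need drivers."""
    succ = {u: [] for u in range(n)}
    for u, v in edges:
        succ[u].append(v)
    match = {}                       # in-copy v -> out-copy u currently matched

    def augment(u, seen):
        # Try to match out-copy u, re-routing existing matches if needed.
        for v in succ[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    matching = sum(augment(u, set()) for u in range(n))
    return max(n - matching, 1)      # a perfectly matched network still needs one driver

# A directed path 0 -> 1 -> 2 is controllable from a single driver node,
# while a star 0 -> 1, 0 -> 2 needs two.
```

This is the single-layer criterion; the multiplex generalization developed in the thesis requires handling the inter-layer structure as well.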
Abstract:
This thesis investigates the structure and composition of the lowermost atmosphere in the framework of the PARADE measurement campaign (PArticles and RAdicals: Diel observations of the impact of urban and biogenic Emissions) at the Kleiner Feldberg in Germany in late summer 2011. Measurements of basic meteorological quantities (temperature, humidity, pressure, wind speed and direction) are evaluated together with radiosonde and aircraft-based measurements of trace gases (carbon monoxide, carbon dioxide, ozone, and particle number concentrations). The goal is to use these data to determine the thermodynamic and dynamic properties of the planetary boundary layer and their influence on the chemical composition of the air masses within it. To this end, the radiosonde and aircraft measurements are combined with Lagrangian methods, distinguishing between purely kinematic models (LAGRANTO and FLEXTRA) and so-called particle dispersion models (FLEXPART). For the first time, a version of FLEXPART-COSMO driven by the meteorological analysis fields of the German Weather Service (Deutscher Wetterdienst) was used in this work. Among the various established methods for determining the boundary-layer height from radiosonde measurements, the bulk Richardson number method is used as the reference, since it is well established for both measurements and model analyses. Within a tolerance of 125 m, the boundary-layer height determined in this way agrees with at least three other methods in 95% of the cases, which confirms its quality. During the campaign the boundary-layer height varies between 0 and 2000 m above ground: a high boundary layer is observed after the passage of cold fronts, whereas a low boundary layer occurs under high-pressure influence and the associated subsidence in calm-wind conditions in the warm sector.

A comparison between boundary-layer heights from radiosondes and from models (COSMO-DE, COSMO-EU, COSMO-7) shows only small differences, between -6 and +12%, during the campaign at the Kleiner Feldberg. It can be shown, however, that systematic differences between the models (COSMO-7 and COSMO-EU) occur over larger simulation domains. This work makes clear that the soil moisture, which is initialized differently in these two models, leads to different boundary-layer heights. The consequence is systematic differences in the air-mass origin and, in particular, in the emission sensitivity. Furthermore, local mixing between the boundary layer and the free troposphere can be identified. This is seen in the temporal change of the correlations between CO2 and O3 from the aircraft measurements and is corroborated by comparison with backward trajectories and radiosonde profiles. The entrainment of these air masses into the boundary layer influences the chemical composition in the vertical and probably also at the ground. This experimental study confirms the relevance of entrainment processes from the free troposphere and the usefulness of the correlation method for determining exchange and entrainment processes at this interface.
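The bulk Richardson number reference method can be sketched on a synthetic sounding as follows; the critical value 0.25 and the idealized profile are illustrative assumptions, not the campaign data:

```python
import numpy as np

G = 9.81          # gravitational acceleration [m s^-2]
RI_CRIT = 0.25    # critical bulk Richardson number (a common choice)

def blh_bulk_richardson(z, theta_v, u, v):
    """Boundary-layer height diagnosed as the lowest level where the bulk
    Richardson number, computed relative to the lowest level, reaches its
    critical value. Profiles are synthetic; real input would be a sounding."""
    wind2 = u**2 + v**2
    ri = G * (theta_v - theta_v[0]) * (z - z[0]) / (theta_v[0] * np.maximum(wind2, 1e-6))
    above = np.where(ri >= RI_CRIT)[0]
    return float(z[above[0]]) if above.size else float(z[-1])

# Synthetic sounding: neutral layer capped by an inversion starting at 1000 m.
z = np.arange(10, 3000, 10.0)                              # heights [m]
theta_v = 300 + np.where(z > 1000, 0.005 * (z - 1000), 0.0)  # virtual pot. temp. [K]
u = np.full_like(z, 5.0)                                   # zonal wind [m/s]
v = np.zeros_like(z)                                       # meridional wind [m/s]
h = blh_bulk_richardson(z, theta_v, u, v)
```

On this idealized profile the diagnosed height sits just above the inversion base, consistent with the method picking up the capping stable layer.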
Abstract:
Nowadays communication is switching from a centralized scenario, where media like newspapers, radio and TV programs produce information and people are just consumers, to a completely different decentralized scenario, where everyone is potentially an information producer through social networks, blogs and forums that allow real-time worldwide information exchange. As a result of their widespread diffusion, these new instruments have started to play an important socio-economic role. They are the most used communication media and, as a consequence, they constitute the main source of information that enterprises, political parties and other organizations can rely on. Analyzing the data stored in servers all over the world is feasible by means of Text Mining techniques such as Sentiment Analysis, which aims to extract opinions from huge amounts of unstructured text. This can be used to determine, for instance, the degree of user satisfaction with products, services, politicians and so on. In this context, this dissertation presents new Document Sentiment Classification methods based on the mathematical theory of Markov chains. All these approaches rely on a Markov-chain-based model which is language independent and whose key features are simplicity and generality, making it attractive compared with previous, more sophisticated techniques. Every technique discussed has been tested on both Single-Domain and Cross-Domain Sentiment Classification tasks, comparing its performance with that of two previous works. The analysis shows that some of the examined algorithms produce results comparable with the best methods in the literature, for both single-domain and cross-domain tasks, in 2-class (i.e. positive versus negative) Document Sentiment Classification.

There is still room for improvement, however: this work also indicates how performance could be enhanced, namely that a good novel feature-selection process would be enough to outperform the state of the art. Furthermore, since some of the proposed approaches show promising results in 2-class Single-Domain Sentiment Classification, future work will also validate these results on tasks with more than 2 classes.
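A minimal sketch of a Markov-chain document sentiment classifier in the spirit described above: one word-transition model per class, with documents scored by the smoothed log-likelihood of their word bigrams. The toy corpus, Laplace smoothing, and bigram formulation are assumptions, not the dissertation's exact model:

```python
from collections import defaultdict
import math

class MarkovSentiment:
    """Per-class Markov chains over words; classify by transition likelihood."""

    def __init__(self):
        self.trans = {}          # class label -> {word: {next_word: count}}
        self.vocab = set()

    def fit(self, docs_by_class):
        for label, docs in docs_by_class.items():
            t = defaultdict(lambda: defaultdict(int))
            for doc in docs:
                words = doc.lower().split()
                self.vocab.update(words)
                for a, b in zip(words, words[1:]):
                    t[a][b] += 1                  # count word-to-word transition
            self.trans[label] = t

    def predict(self, doc):
        words = doc.lower().split()
        V = len(self.vocab) + 1                   # smoothing denominator size

        def score(label):
            t = self.trans[label]
            s = 0.0
            for a, b in zip(words, words[1:]):
                # Laplace-smoothed transition probability under this class.
                s += math.log((t[a][b] + 1) / (sum(t[a].values()) + V))
            return s

        return max(self.trans, key=score)

clf = MarkovSentiment()
clf.fit({"pos": ["great movie really great"], "neg": ["bad movie really bad"]})
label = clf.predict("really great movie")
```

The model is language independent in the same sense as the dissertation's: it only sees token-to-token transitions, with no language-specific resources.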
Abstract:
AIM: To compare the 10-year peri-implant bone loss (BL) rate in periodontally compromised (PCP) and periodontally healthy patients (PHP) around two different implant systems supporting single-unit crowns. MATERIALS AND METHODS: In this retrospective, controlled study, the mean BL (mBL) rate around dental implants placed in four groups of 20 non-smokers was evaluated after a follow-up of 10 years. Two groups of patients treated for periodontitis (PCP) and two groups of PHP were created. For each category (PCP and PHP), two different types of implant had been selected. The mBL was calculated by subtracting the radiographic bone levels at the time of crown cementation from the bone levels at the 10-year follow-up. RESULTS: The mean age, mean full-mouth plaque and full-mouth bleeding scores and implant location were similar between the four groups. Implant survival rates ranged between 85% and 95%, without statistically significant differences (P>0.05) between groups. For both implant systems, PCP showed statistically significantly higher mBL rates and numbers of sites with BL ≥ 3 mm compared with PHP (P<0.0001). CONCLUSIONS: After 10 years, implants in PCP yielded lower survival rates and higher mean marginal BL rates compared with those of implants placed in PHP. These results were independent of the implant system used and the healing modality applied.
Comparison of Monte Carlo collimator transport methods for photon treatment planning in radiotherapy
Abstract:
The aim of this work was a Monte Carlo (MC) based investigation of the impact of different radiation transport methods in the collimators of a linear accelerator on photon beam characteristics, dose distributions, and efficiency. It is also investigated whether simplified radiation transport can be used in certain clinical situations in order to save calculation time.
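The basic idea of Monte Carlo photon transport through an attenuating component can be sketched in a toy form: sample exponential free path lengths and count photons that cross a slab without interacting. The attenuation coefficient, slab thickness, and the neglect of scattering are all illustrative assumptions, far simpler than the collimator transport models compared in the work:

```python
import math
import random

MU = 0.5          # toy attenuation coefficient [1/cm], assumed
THICKNESS = 2.0   # toy slab thickness [cm], assumed

def transmitted_fraction(n_photons, rng):
    """Fraction of photons crossing the slab with no interaction,
    estimated by Monte Carlo sampling of exponential free paths."""
    hits = 0
    for _ in range(n_photons):
        path = rng.expovariate(MU)     # sampled distance to first interaction
        if path > THICKNESS:           # photon traverses the slab untouched
            hits += 1
    return hits / n_photons

rng = random.Random(42)
frac = transmitted_fraction(100_000, rng)
# Analytic expectation for comparison: exp(-MU * THICKNESS)
expected = math.exp(-MU * THICKNESS)
```

The trade-off the abstract mentions is visible even here: more detailed physics (scattering, secondary particles) improves accuracy at the cost of calculation time, which motivates testing simplified transport for specific clinical situations.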
Abstract:
Purpose: The accuracy, efficiency, and efficacy of four commonly recommended medication safety assessment methodologies were systematically reviewed. Methods: Medical literature databases were systematically searched for any comparative study conducted between January 2000 and October 2009 in which at least two of the four methodologies—incident report review, direct observation, chart review, and trigger tool—were compared with one another. Any study that compared two or more methodologies for quantitative accuracy (adequacy of the assessment of medication errors and adverse drug events), efficiency (effort and cost), and efficacy, and that provided numerical data, was included in the analysis. Results: Twenty-eight studies were included in this review. Of these, 22 compared two of the methodologies, and 6 compared three. Direct observation identified the greatest number of reports of drug-related problems (DRPs), while incident report review identified the fewest. However, incident report review generally showed a higher specificity than the other methods and most effectively captured severe DRPs. In contrast, the sensitivity of incident report review was lower than that of trigger tool. While trigger tool was the least labor-intensive of the four methodologies, incident report review appeared to be the least expensive, but only when linked with concomitant automated reporting systems and targeted follow-up. Conclusion: All four medication safety assessment techniques—incident report review, chart review, direct observation, and trigger tool—have different strengths and weaknesses. The overlap between different methods in identifying DRPs is minimal. While trigger tool appeared to be the most effective and labor-efficient method, incident report review best identified high-severity DRPs.
Abstract:
Purpose: Acupuncture is one of the complementary medicine therapies with the greatest demand in Switzerland and many other countries in the West and in Asia. Over the past decades, the pool of scientific literature on acupuncture has markedly increased. The diagnostic methods upon which acupuncture treatment is based have only been addressed sporadically in scientific journals. The goal of this study is to assess the use of different diagnostic methods in acupuncture practices and to investigate similarities and differences in the use of these diagnostic methods between physician and non-physician acupuncturists. Methods: 44 physician acupuncturists with certificates of competence in acupuncture – Traditional Chinese Medicine (TCM) from the ASA (Assoziation Schweizer Ärztegesellschaften für Akupunktur und Chinesische Medizin: the Association of Swiss Medical Societies for Acupuncture and Chinese Medicine) and 33 non-physician acupuncturists listed in the EMR (Erfahrungsmedizinisches Register: a national register, which assigns a quality label to therapists in complementary and alternative medicine) in the cantons Basel-Stadt and Basel-Land were asked to fill out a questionnaire on diagnostic methods. The response rate was 46.8% (69.7% for non-physician acupuncturists and 29.5% for physician acupuncturists). Results: Both physician and non-physician acupuncturists take patients’ medical history (94%) and use pulse diagnosis (89%), tongue diagnosis (83%) and palpation of body and ear acupuncture points (81%) as diagnostic methods to guide their acupuncture treatments. Between the two groups, there were significant differences in the diagnostic tools used. Physician acupuncturists examine their patients with Western medical methods significantly more often (p<.05) than non-physician acupuncturists do, whereas non-physician acupuncturists use pulse diagnosis more often than physicians (p<.05). A highly significant difference was observed in the length of time spent collecting patients’ medical history, with non-physician acupuncturists clearly spending more time (p<.001). Conclusion: Depending on the educational background of the acupuncturist, different diagnostic methods are used for making the diagnosis. In particular, the more time-consuming methods, such as a comprehensive anamnesis and pulse diagnosis, are more frequently employed by non-physician practitioners. Further studies will clarify whether these results are valid for Switzerland in general, and to what extent the differing use of diagnostic methods has an impact on the diagnosis itself, on the resulting treatment methods, and on treatment success and patient satisfaction.
Abstract:
The authors conducted an in vivo study to determine clinical cutoffs for a laser fluorescence (LF) device, an LF pen and a fluorescence camera (FC), as well as to evaluate the clinical performance of these methods and conventional methods in detecting occlusal caries in permanent teeth by using the histologic gold standard for total validation of the sample.
Abstract:
BACKGROUND: This study was undertaken to determine whether use of the direct renin inhibitor aliskiren would reduce cardiovascular and renal events in patients with type 2 diabetes and chronic kidney disease, cardiovascular disease, or both. METHODS: In a double-blind fashion, we randomly assigned 8561 patients to aliskiren (300 mg daily) or placebo as an adjunct to an angiotensin-converting-enzyme inhibitor or an angiotensin-receptor blocker. The primary end point was a composite of the time to cardiovascular death or a first occurrence of cardiac arrest with resuscitation; nonfatal myocardial infarction; nonfatal stroke; unplanned hospitalization for heart failure; end-stage renal disease, death attributable to kidney failure, or the need for renal-replacement therapy with no dialysis or transplantation available or initiated; or doubling of the baseline serum creatinine level. RESULTS: The trial was stopped prematurely after the second interim efficacy analysis. After a median follow-up of 32.9 months, the primary end point had occurred in 783 patients (18.3%) assigned to aliskiren as compared with 732 (17.1%) assigned to placebo (hazard ratio, 1.08; 95% confidence interval [CI], 0.98 to 1.20; P=0.12). Effects on secondary renal end points were similar. Systolic and diastolic blood pressures were lower with aliskiren (between-group differences, 1.3 and 0.6 mm Hg, respectively) and the mean reduction in the urinary albumin-to-creatinine ratio was greater (between-group difference, 14 percentage points; 95% CI, 11 to 17). The proportion of patients with hyperkalemia (serum potassium level, ≥6 mmol per liter) was significantly higher in the aliskiren group than in the placebo group (11.2% vs. 7.2%), as was the proportion with reported hypotension (12.1% vs. 8.3%) (P<0.001 for both comparisons). 
CONCLUSIONS: The addition of aliskiren to standard therapy with renin-angiotensin system blockade in patients with type 2 diabetes who are at high risk for cardiovascular and renal events is not supported by these data and may even be harmful. (Funded by Novartis; ALTITUDE ClinicalTrials.gov number, NCT00549757.).