931 results for statistical study


Relevância:

30.00%

Publicador:

Resumo:

The present study concerns the acoustical characterisation of Italian historical theatres. It starts from ISO 3382, which provides guidelines for the measurement of a well-established set of room acoustic parameters inside performance spaces. Nevertheless, the peculiarity of Italian historical theatres calls for a more specific approach. The Charter of Ferrara goes in this direction, aiming at qualifying the sound field in this kind of hall, and the present work pursues the same path. To understand how the acoustical qualification should be done, the Bonci Theatre in Cesena was taken as a case study. In September 2012 acoustical measurements were carried out in the theatre, recording monaural and binaural impulse responses at each seat in the hall. The values of the time criteria, energy criteria, and psycho-acoustical and spatial criteria were extracted according to ISO 3382. Statistics were computed and a 3D model of the theatre was built and tuned. Statistical investigations were carried out on the whole set of measurement positions and on carefully chosen reduced subsets; it turned out that these subsets are representative only of the “average” acoustics of the hall. Normality tests were carried out to verify whether EDT, T30 and C80 could be described with some degree of reliability by a theoretical distribution. Different results were found, according to the varying assumptions underlying each test. An attempt was then made to relate the numerical results emerging from the statistical analysis to the perceptual sphere. Looking for “acoustically equivalent areas”, relative difference limens were considered as threshold values; no rule of thumb emerged. Finally, the significance of the usual representation through mean values and standard deviations, which is meaningful for normally distributed data, was investigated.
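The abstract's point that normality tests resting on different assumptions can disagree is easy to illustrate. A minimal sketch (not the thesis code; the EDT-like values are synthetic stand-ins) contrasting a moment-based test with a distance-based one:

```python
import math
import numpy as np

def jarque_bera(x):
    """Moment-based normality test (skewness + excess kurtosis).
    Returns (statistic, asymptotic p-value): chi^2(2) null gives p = exp(-JB/2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = x - x.mean()
    s = math.sqrt(np.mean(z**2))
    skew = np.mean(z**3) / s**3
    kurt = np.mean(z**4) / s**4
    jb = n / 6.0 * (skew**2 + (kurt - 3.0)**2 / 4.0)
    return jb, math.exp(-jb / 2.0)

def ks_distance(x):
    """Kolmogorov-Smirnov distance of the sample from a normal fitted to it.
    (Fitting the parameters invalidates the textbook KS p-value: one reason
    different normality tests can disagree.)"""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    mu, sd = x.mean(), x.std()
    cdf = np.array([0.5 * (1.0 + math.erf((v - mu) / (sd * math.sqrt(2.0))))
                    for v in x])
    return max(np.max(np.arange(1, n + 1) / n - cdf),
               np.max(cdf - np.arange(0, n) / n))

rng = np.random.default_rng(0)
edt_like = rng.normal(1.4, 0.1, size=400)   # plausible EDT values [s]
skewed = rng.exponential(1.4, size=400)     # clearly non-normal control
```

Applied per parameter (EDT, T30, C80) over all seats, the two statistics need not point the same way, which is the behaviour the abstract reports.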


Multidetector-row computed tomography has become common in veterinary medicine over the last decade. This technology offers increased spatial and temporal resolution and can cover a wider scanning range in a shorter scanning time, providing an advanced imaging modality. Computed tomography angiographic studies are commonly used in veterinary medicine to evaluate the vascular structures of the abdomen and thorax. Pulmonary pathology is very common in feline patients and is usually evaluated further with computed tomography. To date, few references on the normal computed tomographic aspects of the feline thorax have been reported. In this study a computed tomographic pulmonary angiography (CTPA) protocol is reported in normal cats and compared with the available anatomical references. The CTPA protocol, using a 64 MDCT, achieved high-resolution images of the pulmonary arteries, pulmonary veins and bronchial lumen down to the level of the minor segmental branches. The feline bronchial tree demonstrates a mixed-type architecture, with a monopodial pattern observed in most anatomical parts and a dichotomous pattern at the accessory lobe. The arterial and venous architecture is similar to the bronchial one. Statistical analysis demonstrates a linear correlation between tracheal diameter and feline body weight. Vascular variations were noticed: the pulmonary venous system enters the left atrium through three ostia (the left cranial ostium, formed by the anastomosis of the cranial and caudal portions of the left cranial pulmonary vein; the right ostium, formed by the anastomosis of the right cranial and middle pulmonary veins; and the caudal ostium, formed by the anastomosis of the right and left caudal pulmonary veins). In conclusion, CTPA is applicable in feline patients and provides excellent imaging of the pulmonary arterial, venous and bronchial systems down to the level of the minor segmental branches.
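The reported linear correlation between tracheal diameter and body weight amounts to a simple least-squares fit. A sketch with hypothetical measurements (the numbers are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical cohort (not the study's data): body weight [kg] and
# tracheal diameter [mm] for eight cats.
weight = np.array([2.8, 3.2, 3.6, 4.0, 4.5, 5.1, 5.8, 6.4])
trachea = np.array([5.9, 6.1, 6.4, 6.6, 7.0, 7.2, 7.6, 8.0])

# Least-squares line and Pearson r quantify the linear correlation.
slope, intercept = np.polyfit(weight, trachea, 1)
r = np.corrcoef(weight, trachea)[0, 1]
```

A positive slope with r close to 1 is what "linear correlation of tracheal diameter to weight" means operationally.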


Hypernuclear physics is currently attracting renewed interest, due to the important role of hypernuclear spectroscopy (hyperon-hyperon and hyperon-nucleon interactions) as a unique tool to describe the baryon-baryon interactions in a unified way and to understand the origin of their short-range part.

Hypernuclear research will be one of the main topics addressed by the PANDA experiment at the planned Facility for Antiproton and Ion Research (FAIR). Thanks to the use of stored antiproton (p̄) beams, copious production of double-Λ hypernuclei is expected at the PANDA experiment, which will enable high-precision γ spectroscopy of such nuclei for the first time. At PANDA, excited states of Ξ⁻ hypernuclei will be used as a basis for the formation of double-Λ hypernuclei. For their detection, a dedicated hypernuclear detector setup is planned. This setup consists of a primary nuclear target for the production of Ξ⁻ + Ξ̄ pairs, a secondary active target for hypernucleus formation and the identification of associated decay products, and a germanium detector array to perform γ spectroscopy.

In the present work, the feasibility of performing high-precision γ spectroscopy of double-Λ hypernuclei at the PANDA experiment has been studied by means of a Monte Carlo simulation. To this end, the design and simulation of the dedicated detector setup, as well as of the mechanism to produce double-Λ hypernuclei, have been optimized together with the performance of the whole system. In addition, the production yields of double hypernuclei in excited particle-stable states have been evaluated within a statistical decay model.

A strategy for the unique assignment of various newly observed γ-transitions to specific double hypernuclei has been successfully implemented by combining the predicted energy spectra of each target with the measurement of two pion momenta from the subsequent weak decays of a double hypernucleus.

For the background handling, a method based on time measurement has also been implemented. However, the percentage of tagged events related to the production of Ξ⁻ + Ξ̄ pairs varies between 20% and 30% of the total number of produced events of this type. As a consequence, further work is needed to increase the tagging efficiency by a factor of two.

The contribution of the background reactions to the radiation damage of the germanium detectors has also been studied within the simulation. Additionally, a test to check the degradation of the energy resolution of the germanium detectors in the presence of a magnetic field has been performed. No significant degradation of the energy resolution or of the electronics was observed. A correlation between rise time and pulse shape has been used to correct the measured energy.

Based on the present results, γ spectroscopy of double-Λ hypernuclei at the PANDA experiment appears feasible. A further improvement of the statistics is needed for the background rejection studies. Moreover, a more realistic layout of the hypernuclear detectors has been suggested, using the results of these studies, to achieve a better balance between the physical and the technical requirements.


Precision measurements of observables in neutron beta decay address important open questions of particle physics and cosmology. In this thesis, a measurement of the proton recoil spectrum with the spectrometer aSPECT is described. From this spectrum the antineutrino-electron angular correlation coefficient a can be derived. In our first beam time, at the FRM II in Munich, background instabilities prevented us from presenting a new value for a. In the latest beam time, at the ILL in Grenoble, the background was reduced sufficiently. During the data analysis, we identified and fixed a problem in the detector electronics which caused a significant systematic error. The aim of the latest beam time was a new value for a with an error well below the 4% of the present literature value. A statistical accuracy of about 1.4% was reached, but we could only set upper limits on the correction for the problem in the detector electronics, which are too high to determine a meaningful result. This thesis therefore focuses on the investigation of different systematic effects. With the knowledge of the systematics gained in this thesis, we are able to improve aSPECT to perform a 1% measurement of a in a further beam time.
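For context (standard neutron-decay formalism, not specific to this thesis), the coefficient a enters the decay rate through the angular part of the Jackson-Treiman-Wyld parametrization; a sketch of the relevant terms:

```latex
\frac{d^3\Gamma}{dE_e\,d\Omega_e\,d\Omega_\nu} \;\propto\;
p_e E_e \,(E_0 - E_e)^2
\left(1 \;+\; a\,\frac{\vec p_e \cdot \vec p_\nu}{E_e E_\nu}
        \;+\; b\,\frac{m_e}{E_e}\right),
\qquad
a \simeq \frac{1-\lambda^2}{1+3\lambda^2},\quad \lambda = g_A/g_V .
```

aSPECT does not detect the antineutrino directly: the e-ν correlation distorts the shape of the integrated proton recoil spectrum, from which a is then inferred.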


Despite the scientific achievements of the last decades in astrophysics and cosmology, the majority of the Universe's energy content is still unknown. A potential solution to the “missing mass problem” is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about one event per tonne per year), thus requiring detectors with a large target mass and a low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as 10^-47 cm^2. To investigate whether such a detector can reach this goal, Monte Carlo simulations are mandatory to estimate the background. To this end, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, both electromagnetic and nuclear. From the analysis of the simulations, the background level has been found fully acceptable for the experiment's purposes: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated: the minimum of the WIMP-nucleon cross-section curve is 1.87 × 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked with the Likelihood Ratio method, which confirmed them to within less than a factor of two; such agreement is entirely acceptable considering the intrinsic differences between the two statistical methods. Thus, this thesis shows that the XENON1T detector will be able to reach its design sensitivity, lowering the limits on the WIMP-nucleon cross section by about two orders of magnitude with respect to current experiments.
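The Maximum Gap method referred to here is Yellin's construction for setting an upper limit with unknown background: the largest event-free gap in the signal variable, measured in expected signal events, constrains the signal strength μ. A compact sketch (the function names and the usage below are illustrative, not the thesis code):

```python
import math

def yellin_c0(x, mu):
    """Probability that the maximum gap is below x, for mu expected signal
    events in total, with gap sizes measured in expected events
    (Yellin's C0). Large observed gaps disfavour large mu."""
    m = int(mu // x)
    total = 0.0
    for k in range(m + 1):
        d = mu - k * x
        if d == 0.0:
            # Limit of the k-th term as d -> 0: -exp(-x) for k = 1, else 0.
            if k == 1:
                total -= math.exp(-x)
            continue
        term = (k * x - mu) ** k * math.exp(-k * x) / math.factorial(k)
        total += term * (1.0 + k / d)
    return total

def upper_limit(frac, cl=0.9, hi=100.0):
    """Upper limit on mu at confidence cl when the largest observed gap
    contains a fraction `frac` of the expected signal (frac = 1 for zero
    observed events): solves C0(frac * mu, mu) = cl by bisection."""
    lo = 1e-6
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if yellin_c0(frac * mid, mid) < cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With zero observed events (frac = 1) this reproduces the classic 90% CL Poisson limit of about 2.3 expected events; the limit on μ is then translated into a cross-section limit through the exposure.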


The uncertainty in the determination of the stratigraphic profile of natural soils is one of the main problems in geotechnics, in particular for landslide characterization and modelling. The study deals with a new approach to geotechnical modelling which relies on the stochastic generation of different soil-layer distributions following a Boolean logic; the method has thus been called BoSG (Boolean Stochastic Generation). In this way, it is possible to randomize the presence of a specific material interdigitated in a uniform matrix. In building a geotechnical model it is common to discard some stratigraphic data in order to simplify the model, assuming that the significance of the modelling results is not affected. With the proposed technique it is possible to quantify the error associated with this simplification. Moreover, the technique can be used to determine the most significant zones, where further investigations and surveys would be most effective for building the geotechnical model of the slope. The commercial software FLAC was used for the 2D and 3D geotechnical models. The distribution of the materials was randomized through a specifically coded MATLAB program that automatically generates text files, each representing a specific soil configuration. In addition, a routine was designed to automate the FLAC computations over the different data files in order to maximize the sample number. The methodology is applied to a simplified slope in 2D, a simplified slope in 3D and an actual landslide, namely the Mortisa mudslide (Cortina d'Ampezzo, BL, Italy). However, it could be extended to numerous different cases, especially for hydrogeological analyses and landslide stability assessments in different geological and geomorphological contexts.
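The Boolean stochastic generation step can be sketched in a few lines: randomly place bodies of a second material in a uniform matrix, once per realization, and repeat to build an ensemble. This is an illustration of the idea (in Python rather than the thesis's MATLAB; grid size, body counts and shapes are arbitrary choices):

```python
import numpy as np

def generate_configuration(nx, nz, n_bodies, rng):
    """One stochastic realization: a uniform soil matrix (0) with randomly
    placed rectangular bodies of a second material (1), Boolean-style."""
    grid = np.zeros((nz, nx), dtype=int)
    for _ in range(n_bodies):
        w = rng.integers(3, 10)         # body width  [cells]
        h = rng.integers(1, 3)          # body height [cells]
        x0 = rng.integers(0, nx - w)
        z0 = rng.integers(0, nz - h)
        grid[z0:z0 + h, x0:x0 + w] = 1  # Boolean union of bodies
    return grid

rng = np.random.default_rng(42)
# Ensemble of equally plausible stratigraphies; in the actual workflow each
# realization would be written to a text file and fed to FLAC.
ensemble = [generate_configuration(60, 20, 8, rng) for _ in range(100)]
# Cell-wise occurrence frequency shows where the material tends to sit.
prob_map = np.mean(ensemble, axis=0)
```

Running the geotechnical model over the whole ensemble is what turns the simplification error into a quantity with a distribution rather than a single unknown.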


Many developing countries face a crisis in water management due to population growth, water scarcity, water contamination and the effects of the world economic crisis. Water distribution systems in developing countries face many challenges for efficient repair and rehabilitation, since information on the water network is very limited, which makes rehabilitation assessment plans very difficult. In developed countries, sufficient information and advanced technology make rehabilitation assessment comparatively easy. Developing countries struggle to assess their water networks, which suffer system failures, deterioration of mains and poor water quality due to pipe corrosion and deterioration. The limited information brings into focus the urgent need for an economical rehabilitation assessment for water distribution systems adapted to water utilities. The Gaza Strip is the first case study; it suffers from a severe shortage in water supply, environmental problems and contamination of the underground water resources. This research focuses on improving the water supply network to reduce water losses, based on a limited database, using ArcGIS and commercial water network software (WaterCAD). A new approach for the rehabilitation of water pipes is presented in the Gaza city case study. An integrated rehabilitation assessment model has been developed for water pipes, comprising three components: a hydraulic assessment model, a physical assessment model and a structural assessment model. A WaterCAD model integrated with ArcGIS provides the hydraulic assessment of the network. The models are built around a pipe condition assessment with a maximum of 100 score points per pipe. The results indicate that 40% of the water pipelines score fewer than 50 points, and that about 10% of the total pipe length scores fewer than 30 points.

Using this model, rehabilitation plans for each region of Gaza city can be drawn up according to the available budget and the condition of the pipes. The second case study, Kuala Lumpur, represents semi-developed countries; it has been used to develop an approach for improving a water network under critical conditions using advanced statistical and GIS techniques. Kuala Lumpur (KL) has water losses of about 40% and a high failure rate, which constitutes a severe problem, and can represent cases in South Asian countries. Kuala Lumpur has faced big challenges in reducing water losses over the last five years. One of these challenges is the high deterioration of asbestos cement (AC) pipes: more than 6500 km of AC pipes need to be replaced, which requires a huge budget. Asbestos cement deteriorates through various chemical processes that either leach out the cement material or penetrate the concrete to form products that weaken the cement matrix. This case presents a geo-statistical approach to modelling pipe failures in a water distribution network. The database of Syabas (the Kuala Lumpur water company) has been used to develop the model. The statistical models have been calibrated, verified and used to predict failures for both networks and individual pipes. The mathematical formulation developed for failure frequency in Kuala Lumpur is based on pipeline characteristics such as pipe diameter, length, pressure and failure history. A generalized linear model has been applied to predict pipe failures at both District Meter Zone (DMZ) and individual pipe levels. Based on the Kuala Lumpur case study, several outputs and implications have been achieved, and correlations between spatial and temporal intervals of pipe failures have been examined using ArcGIS.
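The generalized linear model for failure counts can be sketched as a Poisson regression with a log link, fitted by iteratively reweighted least squares. This is an illustration on synthetic data, not the Syabas database; the single "age" covariate stands in for the diameter/length/pressure/history attributes named above:

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=50):
    """Poisson regression (log link) via iteratively reweighted least
    squares. X: (n, p) design matrix with intercept column; y: counts."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)             # predicted mean failure counts
        W = mu                            # Poisson IRLS weights
        z = X @ beta + (y - mu) / mu      # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Synthetic pipes whose failure rate rises exponentially with age.
rng = np.random.default_rng(1)
age = rng.uniform(5, 50, size=2000)       # pipe age [years]
rate = np.exp(-2.0 + 0.05 * age)          # true failure rate per pipe
y = rng.poisson(rate)
X = np.column_stack([np.ones_like(age), age])
beta = fit_poisson_glm(X, y)              # should recover (-2.0, 0.05)
```

Fitted coefficients of this kind are what feed a ranking of rehabilitation candidates: pipes with the highest predicted failure frequency go to the top of the list.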
A Water Pipe Assessment Model (WPAM) has been developed using the analysis of historical pipe failures in Kuala Lumpur; it prioritizes pipe rehabilitation candidates through a ranking system. The Frankfurt water network in Germany is the third main case study, providing an overview of survival analysis and neural network methods for water networks. Rehabilitation strategies for water pipes have been developed for the Frankfurt network in cooperation with Mainova (the Frankfurt water company). This thesis also presents a methodology for the technical condition assessment of plastic pipes based on simple analysis. Overall, the thesis aims to contribute to improving the prediction of pipe failures in water networks using Geographic Information Systems (GIS) and Decision Support Systems (DSS). The output of the technical condition assessment model can be used to estimate future budget needs for rehabilitation and to identify pipes with high replacement priority based on poor condition.


The production of hypernuclei was studied in peripheral heavy-ion reactions, in which a carbon foil was irradiated with ⁶Li projectiles at a beam energy of 2 A GeV. Clear signals for Λ, ³ΛH and ⁴ΛH were observed in their respective invariant-mass distributions from mesonic decay.

This work presents an independent data analysis aimed at verifying earlier results of the HypHI collaboration. To this end, a new track reconstruction based on a Kalman-filter approach and two different algorithms for the reconstruction of secondary vertices were developed.

The invariant masses of the Λ hyperon and of the ³ΛH and ⁴ΛH hypernuclei were determined as 1109.6 ± 0.4, 2981.0 ± 0.3 and 3898.1 ± 0.7 MeV/c², with statistical significances of 9.8σ, 12.8σ and 7.3σ, respectively. The results obtained in this work agree with the earlier analysis.

The yield ratio of the two hypernuclei was determined as N(³ΛH)/N(⁴ΛH) ≈ 3. This indicates that the production mechanism for hypernuclei in heavy-ion-induced reactions in the projectile rapidity region cannot be described by a coalescence mechanism alone: secondary pion-/kaon-induced reactions and Fermi break-up are also involved.
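Invariant masses of this kind follow from the standard four-vector combination of the decay daughters. A minimal sketch (the Λ → p + π⁻ example momenta are illustrative, chosen in the mother's rest frame):

```python
import math

def invariant_mass(particles):
    """Invariant mass of a candidate from its daughters' (E, px, py, pz)
    four-vectors: M = sqrt((sum E)^2 - |sum p|^2), in MeV/c^2 for
    inputs in MeV."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

def four_vector(mass, px, py, pz):
    """Build (E, px, py, pz) for a particle of given rest mass [MeV/c^2]."""
    return (math.sqrt(mass**2 + px**2 + py**2 + pz**2), px, py, pz)

# Illustrative Lambda -> p + pi- candidate: back-to-back daughters with
# roughly the physical decay momentum (~101 MeV/c).
M_P, M_PI = 938.272, 139.570
proton = four_vector(M_P, 0.0, 0.0, 101.0)
pion = four_vector(M_PI, 0.0, 0.0, -101.0)
m_lambda = invariant_mass([proton, pion])   # close to the Lambda mass
```

In the real analysis the daughter momenta come from the Kalman-filter track fit, and the peak position and width of the resulting mass distribution give the quoted values and significances.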


This work concerns the study of the immigration phenomenon, with emphasis on the aspects related to the integration of an immigrant population into a host one. The aim is to show the forecasting ability of a recent finding in which the behaviour of integration quantifiers was analysed and investigated with a mathematical model of statistical-physics origin (a generalization of the monomer-dimer model). After providing a detailed literature review of the model, we show that the model not only identifies the social mechanism that drives a particular integration process but also provides correct forecasts. The research reported here shows that the proposed model of integration and its forecasting framework are simple and effective tools for reducing uncertainty about how integration phenomena emerge and how they are likely to develop in response to increased migration levels in the future.


The width of the 21 cm line (HI) emitted by spiral galaxies depends on the physical processes that release energy in the interstellar medium (ISM). This quantity is called velocity dispersion (σ) and is proportional, first of all, to the thermal kinetic energy of the gas. The accepted theoretical picture predicts that the neutral hydrogen component (HI) exists in the ISM in two stable phases: a cold one (CNM, with σ ~ 0.8 km/s) and a warm one (WNM, with σ ~ 8 km/s). However, this is called into question by the observation that the HI gas usually has larger velocity dispersions. This suggests the presence of turbulence in the ISM, although the energy sources remain unknown. In this thesis we aim to shed new light on this topic. We have studied the HI line emission of two nearby galaxies, NGC6946 and M101; for the latter we used new deep observations obtained with the Westerbork radio interferometer. Through a Gaussian fitting procedure, we produced dispersion maps of the two galaxies. For both of them, we compared the σ values measured in the spiral arms with those in the interarm regions. In NGC6946 we found that, in both arms and interarms, σ grows with the column density, while we obtained the opposite for M101. Using a statistical analysis, we did not find a significant difference between arm and interarm dispersion distributions. Producing star formation rate density (SFRD) maps of the galaxies, we studied their global and local relations with the HI kinetic energy, as inferred from the measured dispersions. For NGC6946 we obtained a good log-log correlation, in agreement with a simple model of supernova-feedback-driven turbulence. This shows that in this galaxy turbulent motions are mainly induced by stellar activity. For M101 we did not find an analogous correlation, since the gas kinetic energy appears constant with the SFRD. We think this may indicate that in this galaxy turbulence is also driven by accretion of extragalactic material.
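Where this work fits a full Gaussian to each 21 cm profile, the dispersion σ of a clean single-peaked profile can equally be sketched from intensity-weighted moments. A toy example with a synthetic WNM-like line (all numbers illustrative):

```python
import numpy as np

def profile_moments(vel, flux):
    """Velocity centroid and dispersion of a line profile from its
    intensity-weighted moments: a common alternative to Gaussian fitting
    when the profile is single-peaked."""
    flux = np.clip(flux, 0.0, None)      # guard against negative noise
    total = flux.sum()
    centroid = np.sum(flux * vel) / total
    sigma = np.sqrt(np.sum(flux * (vel - centroid) ** 2) / total)
    return centroid, sigma

# Synthetic single-Gaussian profile: sigma = 8 km/s (WNM-like), centred
# at 120 km/s, sampled every 2 km/s.
vel = np.arange(60.0, 180.0, 2.0)
flux = 1.5 * np.exp(-0.5 * ((vel - 120.0) / 8.0) ** 2)
c, s = profile_moments(vel, flux)        # recovers ~120 km/s and ~8 km/s
```

Applied pixel by pixel over the data cube, this kind of estimate is what a dispersion map contains; the measured σ ≫ 0.8 km/s values are the turbulence signature discussed above.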


Extended cluster radio galaxies show different morphologies compared with those found isolated in the field. Indeed, symmetric double radio galaxies are only a small percentage of the total content of radio-loud cluster galaxies, which show mainly tailed morphologies (e.g. O'Dea & Owen, 1985). Moreover, cluster mergers can deeply affect the statistical properties of their radio activity. In order to better understand the morphological and radio-activity differences of radio galaxies in major-merging and non-/tidal-merging clusters, we performed a multifrequency study of extended radio galaxies in two cluster complexes, A3528 and A3558. They belong to the innermost region of the Shapley Concentration, the most massive concentration of galaxy clusters (termed a supercluster) in the local Universe, at an average redshift z ≈ 0.043. We analysed low-frequency radio data taken at 235 and 610 MHz with the Giant Metrewave Radio Telescope (GMRT) and combined them with proprietary and literature observations, in order to cover a wide frequency range (150 MHz to 8.4 GHz) for the spectral analysis. The low-frequency images allowed us to carry out a detailed study of the radio tails and of the diffuse emission found in some cases. The results in the radio band were also qualitatively compared with the X-ray information from XMM-Newton observations, in order to test the interaction between the radio galaxies and the cluster weather. We found that the brightest cluster galaxies (BCGs) in the A3528 cluster complex are powerful and present substantial emission from old relativistic plasma characterized by a steep spectrum (α > 2). In the light of the observational evidence, we suggest they are possibly restarted radio galaxies. On the other hand, the tailed radio galaxies trace the host galaxy motion with respect to the ICM, and our findings are consistent with the dynamical interpretation of a tidal interaction (Gastaldello et al. 2003). By contrast, the BCGs in the A3558 cluster complex are either quiet or very faint radio galaxies, supporting the hypothesis that cluster mergers quench the radio emission from AGN.
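The steep spectrum (α > 2) quoted for the old plasma refers to the two-point spectral index in the S ∝ ν^-α convention; a one-line sketch with hypothetical flux densities at the two GMRT frequencies (the values are illustrative, not measurements from this work):

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point radio spectral index alpha in the S ∝ nu^-alpha
    convention: steep (aged) spectra give large positive alpha."""
    return -math.log(s2 / s1) / math.log(nu2 / nu1)

# Hypothetical flux densities [Jy] at 235 and 610 MHz for an aged source.
alpha = spectral_index(1.20, 235e6, 0.16, 610e6)   # steep, alpha > 2
```

Covering 150 MHz to 8.4 GHz, as done here, lets one see where the spectrum steepens and hence date the relativistic plasma.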


Monomer-dimer models are amongst the models in statistical mechanics which have found application in many areas of science, ranging from biology to the social sciences. The model describes a many-body system in which monoatomic and diatomic particles subject to hard-core interactions are deposited on a graph. In our work we provide an extension of this model to higher-order particles. The aim of our work is threefold: first, we study the thermodynamic properties of the newly introduced model; we solve some regular cases analytically and find that, unlike the original, our extension admits phase transitions. Then we tackle the inverse problem, from both an analytical and a numerical perspective. Finally, we propose an application to aggregation phenomena in virtual messaging services.
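The monomer-dimer partition function underlying such models sums x^(number of dimers) over all matchings of the graph: matched pairs of vertices are dimers, unmatched vertices monomers. A brute-force sketch for small graphs (the particular graphs are illustrative):

```python
from itertools import combinations

def matchings(edges, n):
    """Enumerate all matchings (sets of pairwise vertex-disjoint edges)
    of a graph on n vertices; brute force, fine for small graphs."""
    result = [frozenset()]
    for k in range(1, n // 2 + 1):
        for subset in combinations(edges, k):
            verts = [v for e in subset for v in e]
            if len(set(verts)) == 2 * k:      # edges pairwise disjoint
                result.append(frozenset(subset))
    return result

def partition_function(edges, n, x):
    """Monomer-dimer partition function Z = sum over matchings of
    x^(#dimers); unmatched vertices carry monomer weight 1."""
    return sum(x ** len(m) for m in matchings(edges, n))

# Path on 3 vertices: matchings are {}, {(0,1)}, {(1,2)}, so Z = 1 + 2x.
z_path = partition_function([(0, 1), (1, 2)], 3, 2.0)
# 4-cycle: Z = 1 + 4x + 2x^2 (empty, four single edges, two perfect matchings).
z_cycle = partition_function([(0, 1), (1, 2), (2, 3), (3, 0)], 4, 1.0)
```

The extension studied in this work adds higher-order particles (occupying more than two vertices) to this sum, which is what opens the door to the phase transitions the original model famously lacks.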


OBJECTIVES: The aim of the present study was to histologically evaluate and compare a new prototype collagen type I/III-containing equine-derived (EB) and a bovine-derived (BB) cancellous bone block in a dog model. MATERIALS AND METHODS: Four standardized box-shaped defects were bilaterally created at the buccal aspect of the alveolar ridge in the lower jaws of five beagle dogs and randomly allocated to either EB or BB. Each experimental site was covered by a native (non-crosslinked) collagen membrane and left to heal in a submerged position for 12 weeks. Dissected blocks were processed for semi- and quantitative analyses. RESULTS: Neither group showed adverse clinical or histopathological events (i.e. inflammatory or foreign-body reactions). BB specimens revealed no signs of biodegradation and were commonly embedded in fibrous connective tissue; new bone formation and bony graft integration were minimal. In contrast, EB specimens were characterized by a significantly increased cell-mediated (i.e. osteoclasts and multinucleated giant cells) degradation of the graft material (P<0.001). The amount and extent of bone ingrowth were consistently higher in all EB specimens, but failed to reach statistical significance in comparison with the BB group (P>0.05). CONCLUSIONS: It was concluded that the application of EB may not be associated with improved bone formation compared with BB.


Statistical shape models (SSMs) have been widely used as a basis for segmenting and interpreting complex anatomical structures. The robustness of these models is sensitive to the registration procedure, i.e., the establishment of a dense correspondence across a training data set. In this work, two SSMs based on the same training data set of scoliotic vertebrae, but on different registration procedures, were compared. The first model was constructed from the original binary masks without any image pre- or post-processing; the second was obtained by applying a feature-preserving smoothing method to the original training data set, followed by a standard rasterization algorithm. The accuracy of the correspondences was assessed quantitatively by means of the maximum of the mean minimum distance (MMMD) and the Hausdorff distance (HD). The anatomical validity of the models was quantified by means of three criteria: compactness, specificity, and model generalization ability. The objective of this study was to compare quasi-identical models based on standard metrics. Preliminary results suggest that the MMMD distance and the eigenvalues are not sensitive metrics for evaluating the performance and robustness of SSMs.
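Of the three validity criteria named above, compactness is the most mechanical: it is the fraction of total shape variance captured by the first k principal modes of the landmark covariance. A sketch on a toy training set (synthetic 2-D landmarks, not vertebral data):

```python
import numpy as np

def ssm_compactness(shapes, k):
    """Compactness of a statistical shape model: fraction of total
    variance captured by the first k principal modes. `shapes` is an
    (n_samples, n_points * dim) matrix of corresponding landmarks."""
    centered = shapes - shapes.mean(axis=0)
    # Eigenvalues of the covariance via SVD of the centered data matrix.
    singular = np.linalg.svd(centered, compute_uv=False)
    eigvals = singular ** 2 / (shapes.shape[0] - 1)
    return eigvals[:k].sum() / eigvals.sum()

# Toy training set: 30 shapes of 20 2-D points, where nearly all the
# variation lies along a single mode plus small noise.
rng = np.random.default_rng(3)
base = rng.normal(size=40)       # mean shape (flattened)
mode = rng.normal(size=40)       # dominant variation direction
shapes = np.array([base + rng.normal(0.0, 1.0) * mode
                   + rng.normal(0.0, 0.05, size=40) for _ in range(30)])
c1 = ssm_compactness(shapes, 1)  # near 1: one mode explains almost everything
```

Comparing such curves (and the specificity and generalization analogues) between the two registration pipelines is what the study's model comparison amounts to.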


Purpose To evaluate the degree of psychological distress in adult childhood cancer survivors in Switzerland and to characterize survivors with significant distress. Methods Childhood cancer survivors who were younger than 16 years when diagnosed between 1976 and 2003, had survived more than 5 years, and were currently age 20 years or older received a postal questionnaire. Psychological distress was assessed using the Brief Symptom Inventory (BSI). Raw scores were transformed into T scores according to the German norm sample, and the proportion of participants at increased risk for psychological distress was calculated (case rule: T ≥ 63). t tests and univariable and multivariable logistic regressions were used for statistical analyses. Results One thousand seventy-six survivors (63% of eligible survivors, 71.9% of contacted survivors) returned the questionnaire, 987 with complete data on the BSI. Comparison with the norm population showed lower T scores (T < 50) in the Global Severity Index (GSI; T = 46.2), somatization (T = 47.6), obsessive-compulsive tendencies (T = 46.9), and anxiety (T = 48.4). However, more childhood cancer survivors (especially women) had increased distress for GSI (14.4%), interpersonal sensitivity (16.5%), depression (13.4%), aggression (16.9%), and psychotic tendencies (15.6%) than the expected 10% from the norm population. Caseness was associated with female sex, being a single child, older age at study, and self-reported late effects, especially psychological problems. Conclusion The results show that childhood cancer survivors, on average, have less psychological distress than a norm population, but that the proportion of survivors at risk for high psychological distress is disproportionately large. Monitoring psychological distress in childhood cancer survivors may be desirable during routine follow-up, and psychological support should be offered as needed.
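The T-score transformation and the T ≥ 63 case rule used here are simple to make concrete: T = 50 + 10z against the norm sample, so the cutoff corresponds to z ≥ 1.3, roughly the upper 10% of the norm population. A sketch with simulated raw GSI scores (the norm mean/SD and group shifts are hypothetical, not study values):

```python
import numpy as np

def t_scores(raw, norm_mean, norm_sd):
    """Transform raw scale scores to T scores against a norm sample:
    T = 50 + 10 * z. The case rule T >= 63 flags roughly the upper 10%
    of the norm population (z >= 1.3)."""
    return 50.0 + 10.0 * (np.asarray(raw, dtype=float) - norm_mean) / norm_sd

def caseness_rate(raw, norm_mean, norm_sd, cutoff=63.0):
    """Proportion of respondents meeting the case rule."""
    return np.mean(t_scores(raw, norm_mean, norm_sd) >= cutoff)

# Simulated raw scores: one group matching the (hypothetical) norm
# distribution, one shifted upward, as in the elevated subscales reported.
rng = np.random.default_rng(7)
norm_like = rng.normal(0.30, 0.25, size=5000)   # expect ~10% caseness
elevated = rng.normal(0.45, 0.25, size=5000)    # expect clearly more
r1 = caseness_rate(norm_like, 0.30, 0.25)
r2 = caseness_rate(elevated, 0.30, 0.25)
```

This is why a group can show a below-average mean T score on some scales while still having more than 10% of members above the caseness cutoff on others: the mean and the upper tail answer different questions.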