936 results for Full-Scale Crash Test


Relevance:

30.00%

Publisher:

Abstract:

The current cosmological dark sector (dark matter plus dark energy) is challenging our comprehension of the physical processes taking place in the Universe. Recently, some authors have tried to falsify the basic underlying assumptions of this dark matter-dark energy paradigm. In this Letter, we show that oversimplifications of the measurement process may produce false positives in any consistency test based on the globally homogeneous and isotropic Λ cold dark matter (ΛCDM) model and its expansion history based on distance measurements. In particular, when local inhomogeneity effects due to clumped matter or voids are taken into account, an apparent violation of the basic assumptions (the Copernican Principle) seems to be present. Conversely, the amplitude of the deviations also probes the degree of reliability of the phenomenological Dyer-Roeder procedure by confronting its predictions with the accuracy of the weak-lensing approach. Finally, a new method is devised to reconstruct the effects of the inhomogeneities in a ΛCDM model, and some suggestions on how to distinguish clumpiness (or void) effects from those of different cosmologies are discussed.
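For reference, the Dyer-Roeder procedure mentioned above introduces a smoothness parameter α (α = 1 recovers the homogeneous, filled-beam case; α < 1 describes beams propagating through underdense regions). A commonly quoted flat-ΛCDM form of the equation for the angular-diameter distance is the following sketch (notation mine, not necessarily that of the Letter):

```latex
% Dyer-Roeder equation for the angular-diameter distance D_A(z), flat LCDM
\frac{d^{2}D_A}{dz^{2}}
  + \left(\frac{2}{1+z} + \frac{1}{E}\frac{dE}{dz}\right)\frac{dD_A}{dz}
  + \frac{3}{2}\,\frac{\alpha\,\Omega_m\,(1+z)}{E^{2}(z)}\,D_A = 0,
\qquad E(z) = \sqrt{\Omega_m(1+z)^{3} + \Omega_\Lambda}
```

with initial conditions D_A(0) = 0 and dD_A/dz = c/H_0 at z = 0.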

Relevance:

30.00%

Publisher:

Abstract:

A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10^18 eV at the Pierre Auger Observatory is presented. This search is performed as a function of both declination and right ascension in several energy ranges above 10^18 eV, and reported in terms of dipolar and quadrupolar coefficients. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Assuming that any cosmic-ray anisotropy is dominated by dipole and quadrupole moments in this energy range, upper limits on their amplitudes are derived. These upper limits allow us to test the origin of cosmic rays above 10^18 eV from stationary Galactic sources densely distributed in the Galactic disk and predominantly emitting light particles in all directions.
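For context, large-scale anisotropy searches of this kind typically expand the cosmic-ray flux in low-order multipoles; a standard parameterization (not necessarily the collaboration's exact notation) is:

```latex
% Dipole + quadrupole expansion of the flux in arrival direction n
\Phi(\mathbf{n}) = \frac{\Phi_0}{4\pi}
  \left(1 + \mathbf{d}\cdot\mathbf{n}
          + \tfrac{1}{2}\sum_{i,j} Q_{ij}\, n_i n_j \right)
```

where d is the dipole vector and Q the traceless, symmetric quadrupole tensor; the upper limits are quoted on their amplitudes.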

Relevance:

30.00%

Publisher:

Abstract:

A long-standing problem when testing from a deterministic finite state machine is to guarantee full fault coverage even if the faults introduce extra states in the implementation. It is well known that such tests should include the sequences of a traversal set, which contains all input sequences whose length is defined by the number of extra states. This paper suggests the SPY method, which helps reduce the length of tests by distributing the sequences of the traversal set and reducing test branching. It is also demonstrated that an additional assumption about the implementation under test relaxes the requirement of the complete traversal set. The results of an experimental comparison of the proposed method with an existing method indicate that the resulting reduction can reach 40%. Experimental results suggest that the additional assumption about the implementation can help in further reducing the test suite length. Copyright (C) 2011 John Wiley & Sons, Ltd.
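To give a sense of the combinatorial cost that the SPY method reduces: for an implementation with k extra states, classical methods concatenate, among other parts, all input sequences up to length k + 1. A minimal Python sketch of that enumeration (an illustrative helper, not the paper's construction):

```python
from itertools import product

def traversal_sequences(inputs, extra_states):
    """All input sequences of length 1 .. extra_states + 1.

    This set grows as |inputs|**(extra_states + 1), which is why
    distributing and pruning it, as the SPY method does, pays off.
    """
    seqs = []
    for length in range(1, extra_states + 2):
        seqs.extend(product(inputs, repeat=length))
    return seqs

# With a binary input alphabet and 1 extra state: 2 + 4 = 6 sequences.
print(len(traversal_sequences(["a", "b"], extra_states=1)))
```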

Relevance:

30.00%

Publisher:

Abstract:

A total of 46,089 individual monthly test-day (TD) milk yields (10 test-days), from 7,331 complete first lactations of Holstein cattle, were analyzed. A standard multivariate analysis (MV); reduced-rank analyses fitting the first 2, 3, and 4 genetic principal components (PC2, PC3, PC4); and analyses fitting a factor-analytic structure with 2, 3, and 4 factors (FAS2, FAS3, FAS4) were carried out. The models included the random animal genetic effect and the fixed effects of contemporary group (herd-year-month of test-day), age of cow (linear and quadratic effects), and days in milk (linear effect). The residual covariance matrix was assumed to have full rank. In addition, 2 random regression models were applied. Variance components were estimated by the restricted maximum likelihood method. The heritability estimates ranged from 0.11 to 0.24. The genetic correlation estimates between TDs obtained with the PC2 model were higher than those obtained with the MV model, and were close to unity for adjacent test-days at the end of lactation. The results indicate that, for the data considered in this study, only 2 principal components are required to summarize the bulk of the genetic variation among the 10 traits.
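The rank-reduction idea can be illustrated with a small numpy sketch (synthetic covariance matrix, not the study's estimates): decompose a 10 x 10 genetic covariance matrix among test-days and check how much variation the leading principal components capture.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 2))            # impose a rank-2 genetic structure
G = A @ A.T + 0.05 * np.eye(10)         # plus a small full-rank remainder

eigvals = np.linalg.eigvalsh(G)[::-1]   # eigenvalues, largest first
explained = np.cumsum(eigvals) / eigvals.sum()
print(f"share of genetic variance in first 2 PCs: {explained[1]:.1%}")
```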

Relevance:

30.00%

Publisher:

Abstract:

Objective: To compare the gross motor development of preterm (PT) infants without cerebral palsy with that of healthy full-term (FT) infants, according to the Alberta Infant Motor Scale (AIMS); to compare the age of walking attainment between PT and FT infants; and to assess whether the age of walking in PT infants is affected by neonatal variables. Methods: A prospective study compared 101 PT and 52 FT infants monthly, from the first visit until all AIMS items had been observed. Results: Mean scores were similar in their progression, except from the eighth to the tenth month. FT infants attained walking earlier than PT infants. Birth weight, birth length, and duration of neonatal nursery stay were related to walking delay. Conclusion: Gross motor development of PT and FT infants was similar, except from the eighth to the tenth month of age. PT infants walked later than FT infants, and the predictive variables were birth weight, birth length, and duration of neonatal intensive care unit stay.

Relevance:

30.00%

Publisher:

Abstract:

Abstract Background Most of the instruments available to measure oral health-related quality of life (OHRQoL) in paediatric populations focus on older children, whereas parental reports are used for very young children. The Scale of Oral Health Outcomes for 5-year-old children (SOHO-5) assesses the OHRQoL of very young children through self-reports and parental proxy reports. We aimed to cross-culturally adapt the SOHO-5 to the Brazilian Portuguese language and to assess its reliability and validity. Findings We tested the quality of the cross-cultural adaptation in 2 pilot studies with 40 children aged 5-6 years and their parents. The instrument was then tested for reliability and validity on 193 children who attended the paediatric dental screening program at the University of São Paulo. The children were also clinically examined for dental caries. Internal consistency was demonstrated by a Cronbach's alpha coefficient of 0.90 for the children's self-reports and 0.77 for the parental proxy reports. The test-retest reliability results, based on repeated administrations to 159 children, were excellent: the intraclass correlation coefficient was 0.98 for parental and 0.92 for child reports. In general, the construct validity was satisfactory, with consistent and strong associations between the SOHO-5 and different subjective global ratings of oral health, perceived dental treatment need, and overall well-being in both the parental and children's versions (p < 0.001). The SOHO-5 was also able to clearly discriminate between children with and without a history of dental caries (mean scores: 5.8 and 1.1, respectively; p < 0.001). Conclusion The present study demonstrated that the SOHO-5 exhibits satisfactory psychometric properties and is applicable to 5- to 6-year-old children in Brazil.
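For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's alpha has a simple closed form; a minimal Python implementation (illustrative, not the study's code):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_subjects, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```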

Relevance:

30.00%

Publisher:

Abstract:

Background Mindfulness-based interventions improve functioning and quality of life in fibromyalgia (FM) patients. The aim of this study is to perform a psychometric analysis of the Spanish version of the Mindful Attention Awareness Scale (MAAS) in a sample of patients diagnosed with FM. Methods The following measures were administered to 251 Spanish patients with FM: the Spanish version of the MAAS, the Chronic Pain Acceptance Questionnaire, the Pain Catastrophising Scale, the Injustice Experience Questionnaire, the Psychological Inflexibility in Pain Scale, the Fibromyalgia Impact Questionnaire and the EuroQol. The factorial structure was analysed using confirmatory factor analysis (CFA). Cronbach's α coefficient was calculated to examine internal consistency, and the intraclass correlation coefficient (ICC) was calculated to assess the test-retest reliability of the measures. Pearson's correlation tests were run to evaluate univariate relationships between scores on the MAAS and the criterion variables. Results The MAAS scores in our sample were low (M = 56.7; SD = 17.5). CFA confirmed a two-factor structure, with the following fit indices: Satorra-Bentler χ² = 172.34 (p < 0.001), CFI = 0.95, GFI = 0.90, SRMR = 0.05, RMSEA = 0.06. The MAAS was found to have high internal consistency (Cronbach's α = 0.90) and adequate test-retest reliability at a 1-2 week interval (ICC = 0.90). It showed significant correlations in the expected direction with the criterion measures, with the exception of the EuroQol (Pearson's r = 0.15). Conclusion The psychometric properties of the Spanish version of the MAAS in patients with FM are adequate. The dimensionality of the MAAS found in this sample and directions for future research are discussed.
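The abstract does not state which ICC variant was used; a common choice for test-retest designs is the two-way random-effects ICC(2,1) of Shrout and Fleiss, shown here for reference:

```latex
% ICC(2,1): n subjects, k measurement occasions
% MS_R: between-subjects mean square, MS_C: between-occasions, MS_E: error
\mathrm{ICC}(2,1) =
  \frac{MS_R - MS_E}
       {MS_R + (k-1)\,MS_E + \frac{k}{n}\left(MS_C - MS_E\right)}
```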

Relevance:

30.00%

Publisher:

Abstract:

The seminal work of Horn and Schunck [8] is the first variational method for optical flow estimation. It introduced a novel framework in which the optical flow is computed as the solution of a minimization problem. From the assumption that pixel intensities do not change over time, the optical flow constraint equation is derived. This equation relates the optical flow to the derivatives of the image. There are infinitely many vector fields that satisfy the optical flow constraint, so the problem is ill-posed. To overcome this, Horn and Schunck introduced an additional regularity condition that restricts the possible solutions. Their method minimizes both the optical flow constraint and the magnitude of the variations of the flow field, producing smooth vector fields. One limitation of this method is that, typically, it can only estimate small motions. In the presence of large displacements, the method fails when the gradient of the image is not smooth enough. In this work, we describe an implementation of the original Horn and Schunck method and also introduce a multi-scale strategy in order to deal with larger displacements. For this multi-scale strategy, we create a pyramidal structure of downsampled images and replace the optical flow constraint equation with a nonlinear formulation. In order to tackle this nonlinear formula, we linearize it and solve the method iteratively at each scale. There are two common approaches here: computing the motion increment in the iterations, or, as we do, computing the full flow during the iterations. The solutions are incrementally refined over the scales. This pyramidal structure is a standard tool in many optical flow methods.
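The two equations at the heart of the description above are standard and worth stating explicitly. Linearizing the brightness-constancy assumption gives the optical flow constraint, and Horn and Schunck minimize it jointly with a smoothness term:

```latex
% Optical flow constraint (I_x, I_y, I_t are image derivatives)
I_x u + I_y v + I_t = 0

% Horn-Schunck energy; alpha weights the regularization
E(u,v) = \int_{\Omega} \left( I_x u + I_y v + I_t \right)^{2}
  + \alpha^{2} \left( |\nabla u|^{2} + |\nabla v|^{2} \right) dx\, dy
```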

Relevance:

30.00%

Publisher:

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test; high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation of digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At first the algorithm was applied to the differences among the original arrival times of the P phases, without cross-correlation. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was substantially reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can suppose a real closeness among the hypocenters, which belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced. The introduction of the cross-correlation did not bring evident improvements: the two sets of locations (with and without the cross-correlation technique) are very similar to each other. This suggests that the cross-correlation did not substantially improve the precision of the manual picks; probably the picks reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the modest results given by the cross-correlation, it should be remarked that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller), and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results point to the following conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that it does not demand a long processing time, so the user can immediately check the results. During a field survey, this feature makes possible a quasi-real-time check, allowing immediate optimization of the array geometry if the early results so suggest.
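The core measurement used at both scales, the delay between two similar waveforms taken from the cross-correlation peak, with interpolation for sub-sample resolution, can be sketched in a few lines of Python (a hypothetical helper, not the thesis code):

```python
import numpy as np

def cc_delay(x: np.ndarray, y: np.ndarray, dt: float) -> float:
    """Delay of y relative to x, in seconds, from the cross-correlation
    peak, refined by parabolic interpolation around the maximum."""
    cc = np.correlate(y, x, mode="full")
    k = int(np.argmax(cc))
    if 0 < k < len(cc) - 1:            # sub-sample refinement
        a, b, c = cc[k - 1], cc[k], cc[k + 1]
        k = k + 0.5 * (a - c) / (a - 2 * b + c)
    return (k - (len(x) - 1)) * dt     # zero lag sits at index len(x)-1
```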

Relevance:

30.00%

Publisher:

Abstract:

Coordinating activities in a distributed system is an open research topic. Several models have been proposed to achieve this purpose, such as message passing, publish/subscribe, workflows, and tuple spaces. We have focused on the latter model, trying to overcome some of its disadvantages. In particular, we have applied spatial database techniques to tuple spaces in order to increase their performance when handling a large number of tuples. Moreover, we have studied how structured peer-to-peer approaches can be applied to better distribute tuples over large networks. Using some of these results, we have developed a tuple space implementation for the Globus Toolkit that can be used by Grid applications as a coordination service. The development of such a service has been quite challenging due to the limitations imposed by XML serialization, which have heavily influenced its design. Nevertheless, we were able to complete the implementation and use it to build two different test applications: a completely parallelizable one, and a plasma simulation that is not completely parallelizable. Using the latter application, we have compared the performance of our service against MPI. Finally, we have developed and tested a simple workflow in order to show the versatility of our service.
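For readers unfamiliar with the coordination model, a tuple space offers a small set of operations: write a tuple, and read or remove one matching a pattern. A minimal single-process Python sketch (illustrative only; the Globus service described above is a distributed implementation of the same idea):

```python
import threading

class TupleSpace:
    """Linda-style tuple space; None fields in a pattern act as wildcards."""

    def __init__(self):
        self._tuples = []
        self._cv = threading.Condition()

    def out(self, tup):                      # write a tuple
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()

    def take(self, pattern):                 # blocking destructive read ("in")
        with self._cv:
            while True:
                for t in self._tuples:
                    if len(t) == len(pattern) and all(
                            p is None or p == v for p, v in zip(pattern, t)):
                        self._tuples.remove(t)
                        return t
                self._cv.wait()

space = TupleSpace()
space.out(("task", 42))
print(space.take(("task", None)))            # -> ('task', 42)
```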

Relevance:

30.00%

Publisher:

Abstract:

This dissertation examines the challenges and limits that graph analysis algorithms encounter on distributed architectures built from personal computers. In particular, it analyzes the behavior of the PageRank algorithm as implemented in a popular C++ library for distributed graph analysis, the Parallel Boost Graph Library (Parallel BGL). The results presented here show that the Bulk Synchronous Parallel model of parallel programming is unsuitable for an efficient implementation of PageRank on clusters of personal computers. The implementation analyzed in fact exhibited negative scalability: the execution time of the algorithm increases linearly with the number of processors. These results were obtained by running the Parallel BGL PageRank on a cluster of 43 dual-core PCs with 2 GB of RAM each, using several graphs chosen so as to make it easier to identify the variables that affect scalability. Graphs representing different models gave different results, showing that there is a relationship between the clustering coefficient and the slope of the line representing execution time as a function of the number of processors. For example, Erdős–Rényi graphs, which have a low clustering coefficient, were the worst case in the PageRank tests, whereas Small-World graphs, which have a high clustering coefficient, were the best case. The size of the graph also showed a particularly interesting influence on execution time: the relationship between the number of nodes and the number of edges determines the total time.
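As a serial reference for the computation whose distributed behavior is analyzed above, PageRank reduces to a power iteration; a minimal numpy version (illustrative, unrelated to the Parallel BGL code):

```python
import numpy as np

def pagerank(adj: np.ndarray, d: float = 0.85, tol: float = 1e-9) -> np.ndarray:
    """Power-iteration PageRank on a dense adjacency matrix.

    The Parallel BGL distributes exactly this matrix-vector product
    across processors in BSP supersteps, which is where the
    communication cost discussed above arises.
    """
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    out_deg[out_deg == 0] = 1            # crude handling of dangling nodes
    M = (adj / out_deg).T                # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * (M @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
```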

Relevance:

30.00%

Publisher:

Abstract:

Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. In order to reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two different hydraulic models for large-scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is of key importance in reducing run times in large-scale, high-detail applications. The two models were first applied to several numerical test cases to assess the reliability and accuracy of the different model versions. The most effective versions were then applied to real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further applications. On the contrary, the CA2D model proved to be fast and robust, and able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large-scale analysis. Where complex flow processes occurred, local errors were observed due to the model approximations; however, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for the simulation of a wide range of flood event types, including lowland and flash flood events.
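For reference, the diffusive (zero-inertia) approximation adopted by the two models drops the inertial terms of the shallow water equations, so that the flux is driven by the free-surface gradient alone; a common form with Manning friction (notation mine, not necessarily the models' exact formulation) is:

```latex
% h: water depth, H = h + z: free-surface elevation, n_m: Manning coefficient
\frac{\partial h}{\partial t} + \nabla \cdot \mathbf{q} = 0,
\qquad
\mathbf{q} = -\,\frac{h^{5/3}}{n_m \,\sqrt{\lVert \nabla H \rVert}}\, \nabla H
```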

Relevance:

30.00%

Publisher:

Abstract:

A year of satellite-borne lidar CALIOP data is analyzed, and statistics on the occurrence and distribution of bulk properties of cirrus clouds are provided. The relationship between environmental and cloud physical parameters and the shape of the backscatter profile (BSP) is investigated. It is found that the CALIOP BSP is mainly affected by cloud geometrical thickness, while only minor impacts can be attributed to other quantities such as optical depth or temperature. Polynomial functions are provided to fit mean BSPs as functions of geometrical thickness and position within the cloud layer. It is demonstrated that, under realistic hypotheses, the mean BSP is linearly proportional to the IWC profile. The IWC parameterization is included in the RT-RET retrieval algorithm, which is exploited to analyze infrared radiance measurements in the presence of cirrus clouds during the ECOWAR field campaign. Retrieved microphysical and optical properties of the observed clouds are used as input parameters in a forward RT simulation over the 100-1100 cm^-1 spectral interval and compared with interferometric data, to test the ability of the current database of ice crystal single scattering properties to reproduce realistic optical features. Finally, a global-scale investigation of cirrus clouds is performed by developing a collocation algorithm that exploits satellite data from multiple sensors (AIRS, CALIOP, MODIS). The resulting data set is used to test a new infrared hyperspectral retrieval algorithm; in particular, the cloud top height (CTH) product is compared with the data. The retrieval agrees better with the CALIOP CTH than with the MODIS one, even if some cases of underestimation and overestimation are observed.
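The polynomial fitting of mean BSPs mentioned above is conceptually straightforward; a toy numpy sketch (synthetic profile and hypothetical polynomial degree, for illustration only):

```python
import numpy as np

depth = np.linspace(0.0, 1.0, 50)            # normalized in-cloud depth
bsp = np.exp(-((depth - 0.4) / 0.3) ** 2)    # synthetic mean backscatter profile
coeffs = np.polyfit(depth, bsp, deg=4)       # one fit per thickness class
bsp_fit = np.polyval(coeffs, depth)
```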

Relevance:

30.00%

Publisher:

Abstract:

The Vrancea region, at the south-eastern bend of the Carpathian Mountains in Romania, represents one of the most puzzling seismically active zones of Europe. Besides some shallow seismicity spread across the whole Romanian territory, Vrancea is the site of intense seismicity, with a cluster of intermediate-depth foci placed in a narrow, nearly vertical volume. Although large-scale mantle seismic tomographic studies have revealed the presence of a narrow, almost vertical, high-velocity body in the upper mantle, the nature and geodynamics of this deep intra-continental seismicity are still debated. High-resolution seismic tomography could help to reveal more details of the subcrustal structure of Vrancea. Recent developments in computational seismology, as well as the availability of parallel computing, now make it possible to retrieve more information from seismic waveforms and to reach such high-resolution models. This study aimed to evaluate the application of full waveform inversion tomography at the regional scale for the Vrancea lithosphere, using data from the six-month temporary local network CALIXTO deployed in 1999. Starting from a detailed 3D Vp, Vs and density model, built with classical travel-time tomography together with gravity data, I evaluated the improvements obtained with the full waveform inversion approach. The latter proved to be highly problem-dependent and computationally expensive. The model retrieved after the first two iterations does not show large variations with respect to the initial model, but remains in agreement with previous tomographic models. It presents a well-defined, slab-shaped, downgoing high-velocity anomaly, composed of an N-S horizontal anomaly at depths between 40 and 70 km linked to a nearly vertical NE-SW anomaly from 70 to 180 km.
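For context, full waveform inversion iteratively updates the model m to minimize a waveform misfit; the generic least-squares functional (notation mine; the thesis may use a different norm or time window) is:

```latex
% Sum over sources s and receivers r of the waveform residual energy
\chi(\mathbf{m}) = \frac{1}{2} \sum_{s,r} \int_{0}^{T}
  \left\lVert \mathbf{d}^{\mathrm{obs}}_{s,r}(t)
             - \mathbf{d}^{\mathrm{syn}}_{s,r}(t;\mathbf{m}) \right\rVert^{2} dt
```

Its gradient is usually computed with the adjoint method, one forward and one adjoint simulation per source, which is what makes the approach as computationally expensive as reported above.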

Relevance:

30.00%

Publisher:

Abstract:

This work presents a detailed study of fundamental properties of the calcite CaCO3(10.4) surface and related mineral surfaces, made possible not only by the use of non-contact atomic force microscopy but primarily by the measurement of force fields. The absolute surface orientation, as well as the underlying atomic-scale process, was successfully identified for the calcite (10.4) surface.
The adsorption of chiral molecules on calcite is relevant to biomineralization, which makes an understanding of the surface symmetry indispensable. The measurement of the surface force field at the atomic scale is a central aspect of this. Such a force map not only illuminates the interaction of the surface with molecules, which is important for biomineralization, but also makes it possible to identify atomic-scale processes and thereby surface properties.
The introduction of a highly flexible measurement protocol ensures the reliable acquisition of the surface force field, which is not available commercially. The conversion of the raw Δf data into the vertical force Fz is, however, not a trivial procedure, especially when smoothing of the data comes into play. This work describes in detail how Fz can be computed correctly for the experimental conditions of this work. It further describes how the lateral forces Fy and the dissipation Γ were obtained, in order to exploit the full potential of this measurement method.
To understand atomic-scale processes on surfaces, the short-range chemical forces Fz,SR are of utmost importance. For this purpose, long-range contributions must be fitted to Fz and subtracted from it. This is an error-prone task, which was mastered in this work by finding three independent criteria that determine the onset zcut of Fz,SR, a quantity of central importance for this task. A detailed error analysis shows that the mutual deviation of the lateral forces is the criterion that yields trustworthy Fz,SR. This is the first study in which a criterion for the determination of zcut has been given, complete with a detailed error analysis.
With the knowledge of Fz,SR and Fy it was possible to identify one of the fundamental properties of the CaCO3(10.4) surface: the absolute surface orientation. A strong tilt of the imaged objects
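As background to the Δf-to-force conversion discussed above: in the small-amplitude limit the frequency shift is proportional to the force gradient, so the vertical force follows by integration (the experimental conditions of the thesis may require the more involved large-amplitude deconvolution, e.g. the Sader-Jarvis method):

```latex
% f_0: resonance frequency, k: cantilever stiffness
\frac{\Delta f(z)}{f_0} = -\frac{1}{2k}\,\frac{\partial F_z}{\partial z}
\quad\Longrightarrow\quad
F_z(z) = \frac{2k}{f_0} \int_{z}^{\infty} \Delta f(z')\, dz'
```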