851 results for Full-time Schools
Abstract:
PURPOSE: To describe the Brainstem Auditory Evoked Potential (BAEP) results of full-term small-for-gestational-age newborns, comparing them to the results of full-term appropriate-for-gestational-age newborns, in order to verify whether the small-for-gestational-age condition is a risk indicator for retrocochlear hearing impairment. METHODS: This multicentric prospective cross-sectional study assessed 86 full-term newborns - 47 small-for-gestational-age (Study Group) and 39 appropriate-for-gestational-age (Control Group) - of both genders, aged between 2 and 12 days. Newborns with present transient evoked otoacoustic emissions and type A tympanometry were included in the study. Quantitative analysis was based on the mean and standard deviation of the absolute latencies of waves I, III and V and of the interpeak intervals I-III, III-V and I-V for each group. For the qualitative analysis, the BAEP results were classified as normal or altered, taking the newborn's age range at the time of testing into account. RESULTS: In the Study Group, 18 (38%) subjects had altered BAEP results, and in nine of them the small-for-gestational-age condition was the only risk indicator for hearing impairment. In the Control Group, seven (18%) had altered results. Female subjects in the Study Group tended to present more central alterations, whereas in the Control Group males tended to have more alterations. CONCLUSION: Full-term children born small or appropriate for gestational age may present transitory or permanent central hearing impairments, regardless of the presence of risk indicators.
Abstract:
We studied the energy and frequency dependence of the Fourier time lags and intrinsic coherence of the kilohertz quasi-periodic oscillations (kHz QPOs) in the neutron-star low-mass X-ray binaries 4U 1608−52 and 4U 1636−53, using a large data set obtained with the Rossi X-ray Timing Explorer. We confirmed that, in both sources, the time lags of the lower kHz QPO are soft and their magnitude increases with energy. We also found that: (i) In 4U 1636−53, the soft lags of the lower kHz QPO remain constant at ∼30 μs in the QPO frequency range 500–850 Hz, and decrease to ∼10 μs when the QPO frequency increases further. In 4U 1608−52, the soft lags of the lower kHz QPO remain constant at 40 μs up to 800 Hz, the highest frequency reached by this QPO in our data. (ii) In both sources, the time lags of the upper kHz QPO are hard, independent of energy or frequency, and inconsistent with the soft lags of the lower kHz QPO. (iii) In both sources the intrinsic coherence of the lower kHz QPO remains constant at ∼0.6 between 5 and 12 keV, and drops to zero above that energy. The intrinsic coherence of the upper kHz QPO is consistent with being zero across the full energy range. (iv) In 4U 1636−53, the intrinsic coherence of the lower kHz QPO increases from ∼0 at ∼600 Hz to ∼1, and then decreases to ∼0.5 at 920 Hz; in 4U 1608−52, the intrinsic coherence is consistent with the same trend. (v) In both sources the intrinsic coherence of the upper kHz QPO is consistent with zero over the full frequency range of the QPO, except in 4U 1636−53 between 700 and 900 Hz, where the intrinsic coherence marginally increases. We discuss our results in the context of scenarios in which the soft lags are due either to reflection off the accretion disc or to up-/down-scattering in a hot medium close to the neutron star. We finally explore the connection between, on the one hand, the time lags and intrinsic coherence of the kHz QPOs and, on the other, the QPOs’ amplitude and quality factor in these two sources.
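For readers unfamiliar with the timing quantities used above, the following minimal Python sketch (not the paper's code) shows how frequency-dependent time lags and a raw coherence are commonly estimated from two simultaneous energy-band light curves via segment-averaged cross-spectra; the segment length, the simulated light curves and the absence of a Poisson-noise correction are illustrative assumptions.

```python
import numpy as np

def cross_spectral_lags(x, y, dt, seg_len):
    """Estimate frequency-dependent time lags and raw coherence between two
    simultaneously sampled light curves x and y (e.g. soft and hard band),
    by averaging cross-spectra over segments of seg_len samples."""
    n_seg = len(x) // seg_len
    freqs = np.fft.rfftfreq(seg_len, d=dt)[1:]          # drop the zero frequency
    Pxx = np.zeros(freqs.size)
    Pyy = np.zeros(freqs.size)
    Cxy = np.zeros(freqs.size, dtype=complex)
    for k in range(n_seg):
        xs = x[k * seg_len:(k + 1) * seg_len]
        ys = y[k * seg_len:(k + 1) * seg_len]
        X = np.fft.rfft(xs - xs.mean())[1:]
        Y = np.fft.rfft(ys - ys.mean())[1:]
        Pxx += np.abs(X) ** 2
        Pyy += np.abs(Y) ** 2
        Cxy += np.conj(X) * Y
    Pxx /= n_seg
    Pyy /= n_seg
    Cxy /= n_seg
    phase_lag = np.angle(Cxy)                        # radians; positive means y leads x
    time_lag = phase_lag / (2.0 * np.pi * freqs)     # seconds
    coherence = np.abs(Cxy) ** 2 / (Pxx * Pyy)       # raw (not noise-corrected) coherence
    return freqs, time_lag, coherence

# Usage with fake data: two correlated noise light curves, one a delayed copy of the other.
rng = np.random.default_rng(0)
soft = rng.normal(size=2 ** 16)
hard = np.roll(soft, 3) + 0.5 * rng.normal(size=soft.size)
f, lag, coh = cross_spectral_lags(soft, hard, dt=1 / 8192, seg_len=4096)
```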
Abstract:
This thesis studies the long-term behaviour of steel-reinforced slabs, paying particular attention to the effects of shrinkage and creep. Despite the universal popularity of this kind of slab for ordinary floor construction, the major international codes focus on design at the ultimate limit state, relegating the serviceability limit state to a simple check after the design. In Australia, by contrast, this is not the case: since the country is not subject to significant seismic actions, the main concern is the long-term behaviour of the structure. Although there are many studies on the long-term effects of shrinkage and creep, to date few address the behaviour of slabs with a cracked cross-section and how shrinkage and creep influence it. For this reason, a series of ten full-scale reinforced slabs was prepared and monitored under laboratory conditions to investigate this behaviour. A wide range of situations was studied in order to cover as many cases as possible, including the use of a fog room able to reproduce an environment of 100% humidity. The results show a large difference in deflections between slabs subjected to both shrinkage and creep soon after partial cracking of the cross-section, and slabs that had already experienced several weeks of shrinkage while the section was still uncracked and were subjected to creep only after cracking.
Abstract:
Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the design and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of structural dynamics, focusing on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited by a harmonic force or in free vibration. The second chapter is entirely devoted to the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately from the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared, and attention is then focused on some advantages of the proposed methodology. The third chapter deals with real structures subjected to experimental tests in which the force is not known, as in an ambient or impact test. For this analysis the continuous wavelet transform (CWT) was chosen, which allows a simultaneous time-frequency investigation of a generic signal x(t). The CWT is first used to process free oscillations, with excellent results in terms of frequencies, damping and vibration modes. Its application to ambient vibrations yields accurate modal parameters of the system, although some important observations must be made about the damping estimates. The fourth chapter again addresses the post-processing of data acquired after a vibration test, this time through the discrete wavelet transform (DWT). In the first part, the results obtained with the DWT are compared with those obtained with the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal, since in ambient vibration tests the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. Starting from the modal parameters obtained from ambient vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 and by the University of Sheffield, an FE model of the bridge is built in order to establish which type of model captures the real dynamic behaviour of the bridge most accurately. The sixth chapter draws the conclusions of the presented research. These concern the frequency-domain method for evaluating the modal parameters of a structure and its advantages, the benefits of applying wavelet-transform-based procedures to identification tests with unknown input, and finally the problem of 3D modeling of systems with many degrees of freedom and with different types of uncertainty.
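As a sketch of the classical frequency-domain step described in the second chapter, the snippet below estimates a frequency response function with the standard H1 estimator built from averaged cross- and auto-spectral densities; the simulated single-degree-of-freedom system and all parameter values are illustrative, and this is not the ellipse-based method proposed in the thesis.

```python
import numpy as np
from scipy.signal import csd, welch, lfilter, bilinear

def estimate_frf_h1(force, response, fs, nperseg=1024):
    """H1 estimate of the FRF: H1(f) = S_fx(f) / S_ff(f), where S_fx is the
    cross-spectral density between input force and output response and
    S_ff is the auto-spectral density of the force."""
    f, S_fx = csd(force, response, fs=fs, nperseg=nperseg)
    _, S_ff = welch(force, fs=fs, nperseg=nperseg)
    return f, S_fx / S_ff

# Usage with a simulated single-degree-of-freedom oscillator driven by white noise.
fs = 1024.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
force = rng.normal(size=t.size)
wn, zeta = 2 * np.pi * 10.0, 0.02           # ~10 Hz natural frequency, 2% damping
b, a = bilinear([1.0], [1.0, 2 * zeta * wn, wn ** 2], fs=fs)   # discretized SDOF system
response = lfilter(b, a, force)
freq, H1 = estimate_frf_h1(force, response, fs)
peak_freq = freq[np.argmax(np.abs(H1))]      # expected to lie near 10 Hz
```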
Abstract:
Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving that these systems are well suited to large-scale, efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, careful processing, inversion, post-processing, data integration and data calibration is the proper approach for providing reliable and consistent resistivity models. Our approach can be of interest to many end users, from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of integrating several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area and can further be used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as to support hydrogeological flow model prediction. In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, compared to having only a ground-based TEM dataset and/or only borehole data.
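As a toy illustration of how a voxel model can yield an aquifer volume estimate, the sketch below counts voxels whose resistivity falls in a range interpreted as water-bearing sediment; the resistivity thresholds, grid and cell size are invented, and the thesis builds its voxel model from integrated AEM, ground TEM and borehole data rather than from a simple threshold.

```python
import numpy as np

def aquifer_volume_from_voxels(resistivity, cell_size, rho_min=50.0, rho_max=200.0):
    """Estimate the aquifer volume from a 3D voxel model of resistivity (ohm·m)
    by counting cells whose resistivity falls in the range interpreted as
    water-bearing sand/gravel. Thresholds and cell size are illustrative."""
    mask = (resistivity >= rho_min) & (resistivity <= rho_max)
    return mask.sum() * np.prod(cell_size)      # m^3 if cell_size is in metres

# Usage with a made-up 100 x 100 x 40 voxel grid of 50 m x 50 m x 5 m cells.
rng = np.random.default_rng(3)
model = rng.lognormal(mean=4.0, sigma=0.8, size=(100, 100, 40))
vol_m3 = aquifer_volume_from_voxels(model, cell_size=(50.0, 50.0, 5.0))
```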
Abstract:
The Vrancea region, at the south-eastern bend of the Carpathian Mountains in Romania, represents one of the most puzzling seismically active zones of Europe. Besides some shallow seismicity spread across the whole Romanian territory, Vrancea is the site of intense seismicity, with a cluster of intermediate-depth foci located in a narrow, nearly vertical volume. Although large-scale mantle seismic tomographic studies have revealed the presence of a narrow, almost vertical, high-velocity body in the upper mantle, the nature and geodynamics of this deep intra-continental seismicity are still debated. High-resolution seismic tomography could help reveal more details of the subcrustal structure of Vrancea. Recent developments in computational seismology, together with the availability of parallel computing, now make it possible to retrieve more information from seismic waveforms and to reach such high-resolution models. This study aimed to evaluate the application of full waveform inversion tomography at the regional scale to the Vrancea lithosphere, using data from the six-month temporary local network CALIXTO deployed in 1999. Starting from a detailed 3D Vp, Vs and density model, built from classical travel-time tomography together with gravity data, I evaluated the improvements obtained with the full waveform inversion approach. The latter proved to be highly problem-dependent and computationally expensive. The model retrieved after the first two iterations does not show large variations with respect to the initial model but remains in agreement with previous tomographic models. It presents a well-defined, downgoing-slab-shaped high-velocity anomaly, composed of an N-S horizontal anomaly at depths between 40 and 70 km linked to a nearly vertical NE-SW anomaly from 70 to 180 km.
Abstract:
Geometric packing problems may be formulated mathematically as constrained optimization problems, but finding a good solution is a challenging task. The more complicated the geometry of the container or of the objects to be packed, the more complex the non-penetration constraints become. In this work we propose the use of a physics engine that simulates a system of colliding rigid bodies as a tool to resolve interpenetration conflicts and to optimize configurations locally. We develop an efficient and easy-to-implement physics engine specialized for collision detection and contact handling. In the course of developing this engine, a number of novel algorithms for distance calculation and intersection-volume computation were designed and implemented, which are presented in this work. They are highly specialized to provide fast responses for cuboids and triangles as input geometry, whereas the concepts they are based on can easily be extended to other convex shapes. Especially noteworthy in this context is our ε-distance algorithm, a novel algorithm that is not only very robust and fast but also compact in its implementation. Several state-of-the-art third-party implementations are presented, and we show that our implementations beat them in runtime and robustness. The packing algorithm that sits on top of the physics engine is a Monte Carlo based approach implemented for packing cuboids into a container described by a triangle soup. We give an implementation for the SAE J1100 variant of the trunk packing problem, compare it to several established approaches, and show that it gives better results in less time than these existing implementations.
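As a much-simplified stand-in for the two ingredients described above, an intersection-volume routine and a Monte Carlo packing loop, the sketch below computes the overlap volume of axis-aligned boxes and accepts random moves that do not increase the total interpenetration; the thesis handles oriented cuboids, triangle soups and full rigid-body dynamics, so everything here (names and parameters included) is illustrative.

```python
import random
from dataclasses import dataclass

@dataclass
class Box:
    # axis-aligned box given by its min and max corners (x, y, z)
    lo: tuple
    hi: tuple

def overlap_volume(a: Box, b: Box) -> float:
    """Intersection volume of two axis-aligned boxes (0 if they do not overlap)."""
    vol = 1.0
    for d in range(3):
        extent = min(a.hi[d], b.hi[d]) - max(a.lo[d], b.lo[d])
        if extent <= 0.0:
            return 0.0
        vol *= extent
    return vol

def total_overlap(boxes) -> float:
    """Sum of pairwise overlap volumes; zero means a non-penetrating packing."""
    return sum(overlap_volume(boxes[i], boxes[j])
               for i in range(len(boxes)) for j in range(i + 1, len(boxes)))

def propose_move(box: Box, step: float, container: Box) -> Box:
    """Randomly translate a box, clamping it inside an axis-aligned container."""
    lo, hi = list(box.lo), list(box.hi)
    for d in range(3):
        shift = random.uniform(-step, step)
        size = hi[d] - lo[d]
        new_lo = min(max(lo[d] + shift, container.lo[d]), container.hi[d] - size)
        lo[d], hi[d] = new_lo, new_lo + size
    return Box(tuple(lo), tuple(hi))

def monte_carlo_pack(boxes, container, iters=10000, step=0.1):
    """Greedy Monte Carlo: accept a random move whenever it does not increase
    the total interpenetration volume."""
    boxes = list(boxes)
    penalty = total_overlap(boxes)
    for _ in range(iters):
        i = random.randrange(len(boxes))
        candidate = boxes.copy()
        candidate[i] = propose_move(boxes[i], step, container)
        new_penalty = total_overlap(candidate)
        if new_penalty <= penalty:
            boxes, penalty = candidate, new_penalty
    return boxes, penalty

# Usage: three overlapping boxes in a 10 x 10 x 10 container.
container = Box((0, 0, 0), (10, 10, 10))
boxes = [Box((0, 0, 0), (4, 3, 2)), Box((1, 1, 0), (5, 4, 2)), Box((2, 0, 1), (6, 3, 3))]
packed, residual_overlap = monte_carlo_pack(boxes, container)
```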
Abstract:
Time series are ubiquitous. The acquisition and processing of continuously measured data is present in all areas of the natural sciences, medicine and finance. The enormous growth of recorded data volumes, whether from automated monitoring systems or integrated sensors, demands extraordinarily fast algorithms in theory and practice. This thesis therefore deals with the efficient computation of subsequence alignments. Complex algorithms such as anomaly detection, motif queries or the unsupervised extraction of prototypical building blocks in time series make extensive use of these alignments, which motivates the need for fast implementations. The thesis is divided into three approaches that address this challenge: four alignment algorithms and their parallelization on CUDA-capable hardware, an algorithm for segmenting data streams, and a unified treatment of Lie-group-valued time series.

The first contribution is a complete CUDA port of the UCR suite, the world-leading implementation of subsequence alignment. It includes a new computation scheme for determining local alignment scores under the z-normalized Euclidean distance, which can be deployed on any parallel hardware supporting fast Fourier transforms. Furthermore, we give a SIMT-compatible implementation of the UCR suite's lower-bound cascade for the efficient computation of local alignment scores under Dynamic Time Warping. Both CUDA implementations enable computations one to two orders of magnitude faster than established methods.

Second, we examine two linear-time approximations for the elastic alignment of subsequences. On the one hand, we treat a SIMT-compatible relaxation scheme for greedy DTW and its efficient CUDA parallelization. On the other hand, we introduce a new local distance measure, the Gliding Elastic Match (GEM), which can be computed with the same asymptotic time complexity as greedy DTW but offers a complete relaxation of the penalty matrix. Further improvements include invariance against trends on the measurement axis and uniform scaling on the time axis. In addition, an extension of GEM to multi-shape segmentation is discussed and evaluated on motion data. Both CUDA parallelizations achieve runtime improvements of up to two orders of magnitude.

The treatment of time series in the literature is usually restricted to real-valued measurements. The third contribution is a unified method for handling Lie-group-valued time series. Building on this, distance measures on the rotation group SO(3) and on the Euclidean group SE(3) are treated. Furthermore, memory-efficient representations and group-compatible extensions of elastic measures are discussed.
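The primitive that the first contribution ports to CUDA is subsequence search under the z-normalized Euclidean distance, as implemented by the UCR suite. A naive CPU sketch of that primitive, without the FFT-based scheme or the lower-bound cascade mentioned above, might look as follows (the example data are invented).

```python
import numpy as np

def znorm(x, eps=1e-12):
    """Z-normalize a window: zero mean, unit standard deviation."""
    s = x.std()
    return (x - x.mean()) / (s if s > eps else 1.0)

def best_match_zed(series, query):
    """Naive subsequence search: slide the query over the series and return the
    offset with the smallest z-normalized Euclidean distance.
    The UCR suite (and its CUDA port) accelerates exactly this computation."""
    m = len(query)
    q = znorm(np.asarray(query, dtype=float))
    s = np.asarray(series, dtype=float)
    best_dist, best_pos = np.inf, -1
    for i in range(len(s) - m + 1):
        w = znorm(s[i:i + m])
        d = np.sqrt(np.sum((w - q) ** 2))
        if d < best_dist:
            best_dist, best_pos = d, i
    return best_pos, best_dist

# Usage: find a noisy copy of a sine bump hidden in a random walk.
rng = np.random.default_rng(2)
walk = np.cumsum(rng.normal(size=5000))
pattern = np.sin(np.linspace(0, np.pi, 128))
walk[3000:3128] += 5.0 * pattern
pos, dist = best_match_zed(walk, pattern)   # pos is expected near 3000
```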
Abstract:
During the sixteenth and seventeenth centuries, the excise taxes (Ungeld) paid by town residents on the consumption of beer, wine, mead and brandy represented the single most important source of civic revenue for many German cities. In a crisis, these taxes could spike to 70-80% of civic income. This paper examines civic budgets and 'behind-the-scenes' deliberations in a sample of towns in southern Germany in order to illuminate how decisions affecting consumer taxes were made. Even during the sobriety movements of the Reformation and post-Reformation period, tax income from drinkers remained attractive to city leaders because the bulk of the excise tax burden could easily be shifted away from privileged members of society and placed on the population at large. At the same time, governments had to maintain a careful balance between what they needed in order to govern and what the consumer market could bear, for high taxes on drinks were also targeted in many popular revolts. This led to nimble politicking by those responsible for tax decisions. Drink taxes were introduced, raised, lowered and otherwise manipulated based not only on shifting fashions and tastes but also on the degree of economic stress faced by the community. Where civic rulers were successful in striking the right balance, the rewards were considerable. The income from drink sales was a major factor in how the cities of the Empire survived the wars and other crises of the early modern period without going into so much debt that they lost their independence.
Abstract:
BACKGROUND: Randomized controlled trials (RCTs) are the best tool to evaluate the effectiveness of clinical interventions. The Consolidated Standards of Reporting Trials (CONSORT) statement was introduced in 1996 to improve the reporting of RCTs. We aimed to determine the extent of ambiguity and the reporting quality, as assessed by adherence to the CONSORT statement, in published reports of RCTs involving patients with Hodgkin lymphoma from 1966 through 2002. METHODS: We analyzed 242 published full-text reports of RCTs in patients with Hodgkin lymphoma. Quality of reporting was assessed using a 14-item questionnaire based on the CONSORT checklist. Reporting was studied in two pre-CONSORT periods (1966-1988 and 1989-1995) and one post-CONSORT period (1996-2002). RESULTS: Only six of the 14 items were addressed in 75% or more of the studies in all three time periods. Most items that are necessary to assess the methodologic quality of a study were reported by fewer than 20% of the studies. Improvements over time were seen for some items, including the description of the statistical methods used, reporting of primary research outcomes, performance of power calculations, method of randomization and allocation concealment, and having performed an intention-to-treat analysis. CONCLUSIONS: Despite recent improvements, reporting levels of CONSORT items in RCTs involving patients with Hodgkin lymphoma remain unsatisfactory. Further concerted action by journal editors, learned societies, and medical schools is necessary to make authors even more aware of the need to improve the reporting of RCTs in medical journals and to allow assessment of the validity of published clinical research.
Abstract:
In many clinical trials to evaluate treatment efficacy, it is believed that there may exist latent treatment-effectiveness lag times after which the medical procedure or chemical compound would be in full effect. In this article, semiparametric regression models are proposed and studied to estimate the treatment effect accounting for such latent lag times. The new models take advantage of the invariance property of the additive hazards model when marginalizing over random effects, so the parameters in the models are easy to estimate and interpret, while the flexibility of leaving the baseline hazard function unspecified is retained. Monte Carlo simulation studies demonstrate the appropriateness of the proposed semiparametric estimation procedure. Data collected in an actual randomized clinical trial, which evaluates the effectiveness of biodegradable carmustine polymers for the treatment of recurrent brain tumors, are analyzed.
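To make the notion of a latent effectiveness lag concrete, the sketch below simulates right-censored survival data under an additive hazards model whose treatment term switches on only after a lag time; this is a data-generating illustration of the idea with invented parameter values, not the semiparametric estimation procedure proposed in the article.

```python
import numpy as np

def simulate_lagged_additive_hazards(n, lam0=0.10, beta=-0.06, lag=2.0,
                                     censor_time=20.0, seed=0):
    """Simulate right-censored survival data with hazard
    lambda(t) = lam0 + beta * Z * 1(t > lag), Z in {0, 1}.
    The treatment (Z = 1) only changes the hazard after the lag time."""
    rng = np.random.default_rng(seed)
    z = rng.integers(0, 2, size=n)               # randomized treatment indicator
    e = rng.exponential(size=n)                  # unit-exponential draws
    t = np.empty(n)
    for i in range(n):
        rate_late = lam0 + beta * z[i]           # hazard after the lag (must stay > 0)
        if z[i] == 0 or e[i] <= lam0 * lag:
            t[i] = e[i] / lam0                   # control subject, or event before the lag
        else:
            t[i] = lag + (e[i] - lam0 * lag) / rate_late
    obs = np.minimum(t, censor_time)             # administrative censoring
    event = (t <= censor_time).astype(int)
    return obs, event, z

# Crude check: with a protective treatment, treated subjects should live longer on average.
times, events, treat = simulate_lagged_additive_hazards(n=2000)
print(times[treat == 1].mean(), times[treat == 0].mean())
```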
Abstract:
Partial or full life-cycle tests are needed to assess the potential of endocrine-disrupting compounds (EDCs) to adversely affect development and reproduction of fish. Small fish species such as zebrafish, Danio rerio, are under consideration as model organisms for appropriate test protocols. The present study examines how reproductive effects resulting from exposure of zebrafish to the synthetic estrogen 17alpha-ethinylestradiol (EE2) vary with concentration (0.05 to 10 ng EE2 L(-1), nominal), and with timing/duration of exposure (partial life-cycle, full life-cycle, and two-generation exposure). Partial life-cycle exposure of the parental (F1) generation until completion of gonad differentiation (0-75 d postfertilization, dpf) impaired juvenile growth, time to sexual maturity, adult fecundity (egg production/female/day), and adult fertilization success at 1.1 ng EE2 L(-1) and higher. Lifelong exposure of the F1 generation until 177 dpf resulted in lowest observed effect concentrations (LOECs) for time to sexual maturity, fecundity, and fertilization success identical to those of the developmental test (0-75 dpf), but the slope of the concentration-response curve was steeper. Reproduction of zebrafish was completely inhibited at 9.3 ng EE2 L(-1), and this was essentially irreversible as a 3-mo depuration restored fertilization success to only a very low rate. Accordingly, elevated endogenous vitellogenin (VTG) synthesis and degenerative changes in gonad morphology persisted in depurated zebrafish. Full life-cycle exposure of the filial (F2) generation until 162 dpf impaired growth, delayed onset of spawning and reduced fecundity and fertilization success at 2.0 ng EE2 L(-1). In conclusion, results show that the impact of estrogenic agents on zebrafish sexual development and reproductive functions as well as the reversibility of effects, varies with exposure concentration (reversibility at < or = 1.1 ng EE2 L(-1) and irreversibility at 9.3 ng EE2 L(-1)), and between partial and full life-cycle exposure (exposure to 10 ng EE2 L(-1) during critical period exerted no permanent effect on sexual differentiation, but life-cycle exposure did).
Abstract:
BACKGROUND: Abstracts of presentations at scientific meetings are usually available only in conference proceedings. If subsequent full publication of abstract results is based on the magnitude or direction of study results, publication bias may result. Publication bias, in turn, creates problems for those conducting systematic reviews or relying on the published literature for evidence. OBJECTIVES: To determine the rate at which abstract results are subsequently published in full, and the time between meeting presentation and full publication. To assess the association between study characteristics and full publication. SEARCH STRATEGY: We searched MEDLINE, EMBASE, The Cochrane Library, Science Citation Index, reference lists, and author files. Date of most recent search: June 2003. SELECTION CRITERIA: We included all reports that examined the subsequent full publication rate of biomedical results initially presented as abstracts or in summary form. Follow-up of abstracts had to be at least two years. DATA COLLECTION AND ANALYSIS: Two reviewers extracted data. We calculated the weighted mean full publication rate and time to full publication. Dichotomous variables were analyzed using relative risk and random effects models. We assessed time to publication using Kaplan-Meier survival analyses. MAIN RESULTS: Combining data from 79 reports (29,729 abstracts) resulted in a weighted mean full publication rate of 44.5% (95% confidence interval (CI) 43.9 to 45.1). Survival analyses resulted in an estimated publication rate at 9 years of 52.6% for all studies, 63.1% for randomized or controlled clinical trials, and 49.3% for other types of study designs. 'Positive' results defined as any 'significant' result showed an association with full publication (RR = 1.30; CI 1.14 to 1.47), as did 'positive' results defined as a result favoring the experimental treatment (RR = 1.17; CI 1.02 to 1.35), and 'positive' results emanating from randomized or controlled clinical trials (RR = 1.18; CI 1.07 to 1.30). Other factors associated with full publication include oral presentation (RR = 1.28; CI 1.09 to 1.49); acceptance for meeting presentation (RR = 1.78; CI 1.50 to 2.12); randomized trial study design (RR = 1.24; CI 1.14 to 1.36); and basic research (RR = 0.79; CI 0.70 to 0.89). Higher quality of abstracts describing randomized or controlled clinical trials was also associated with full publication (RR = 1.30; CI 1.00 to 1.71). AUTHORS' CONCLUSIONS: Only 63% of results from abstracts describing randomized or controlled clinical trials are published in full. 'Positive' results were more frequently published than not 'positive' results.
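As an illustration of one computation behind results like those above, the sketch below pools a relative risk across reports using inverse-variance weighting on the log scale; fixed-effect pooling is shown for brevity, whereas the review used random-effects models, and the counts are invented.

```python
import numpy as np

def pooled_relative_risk(studies):
    """Inverse-variance pooled relative risk from per-study 2x2 counts.
    Each study is (events_pos, total_pos, events_neg, total_neg), e.g. the number
    of 'positive'-result abstracts reaching full publication versus the rest.
    Fixed-effect pooling is shown here for simplicity."""
    log_rr, weights = [], []
    for a, n1, c, n2 in studies:
        rr = (a / n1) / (c / n2)
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2       # approximate variance of log(RR)
        log_rr.append(np.log(rr))
        weights.append(1.0 / var)
    log_rr, weights = np.array(log_rr), np.array(weights)
    pooled = np.sum(weights * log_rr) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))
    ci = (np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))
    return np.exp(pooled), ci

# Usage with made-up counts from three hypothetical reports.
studies = [(120, 200, 90, 210), (60, 110, 45, 100), (200, 380, 150, 360)]
rr, (lo, hi) = pooled_relative_risk(studies)
```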
Abstract:
Knowledge of the time interval from death (post-mortem interval, PMI) has an enormous legal, criminological and psychological impact. Aiming to find an objective method for the determination of PMIs in forensic medicine, 1H-MR spectroscopy (1H-MRS) was used in a sheep head model to follow changes in brain metabolite concentrations after death. Following the characterization of newly observed metabolites (Ith et al., Magn. Reson. Med. 2002; 5: 915-920), the full set of acquired spectra was analyzed statistically to provide a quantitative estimation of PMIs with their respective confidence limits. In a first step, analytical mathematical functions are proposed to describe the time courses of 10 metabolites in the decomposing brain up to 3 weeks post-mortem. Subsequently, the inverted functions are used to predict PMIs based on the measured metabolite concentrations. Individual PMIs calculated from five different metabolites are then pooled, being weighted by their inverse variances. The predicted PMIs from all individual examinations in the sheep model are compared with known true times. In addition, four human cases with forensically estimated PMIs are compared with predictions based on single in situ MRS measurements. Interpretation of the individual sheep examinations gave a good correlation up to 250 h post-mortem, demonstrating that the predicted PMIs are consistent with the data used to generate the model. Comparison of the estimated PMIs with the forensically determined PMIs in the four human cases shows an adequate correlation. Current PMI estimations based on forensic methods typically suffer from uncertainties in the order of days to weeks without mathematically defined confidence information. In turn, a single 1H-MRS measurement of brain tissue in situ results in PMIs with defined and favorable confidence intervals in the range of hours, thus offering a quantitative and objective method for the determination of PMIs.
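The pooling step described above, in which PMIs derived from five metabolites are combined using inverse-variance weights, can be written in a few lines; the numbers below are invented, and the actual workflow also involves fitting and inverting the metabolite time-course functions.

```python
import numpy as np

def pool_pmi_estimates(pmis, variances):
    """Combine per-metabolite PMI estimates into a single value by weighting
    each estimate with its inverse variance, as described for the sheep model.
    Returns the pooled PMI and an approximate 95% confidence interval."""
    pmis = np.asarray(pmis, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * pmis) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Usage with invented per-metabolite estimates (hours) and variances (hours^2).
pmi_hours, ci = pool_pmi_estimates([70, 64, 75, 68, 72], [25, 36, 49, 30, 40])
```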