896 results for dynamic and static collection
Abstract:
Structures experience various types of loads throughout their lifetime, which can be either static or dynamic and may be associated with phenomena such as corrosion and chemical attack, among others. As a consequence, different types of structural damage can be produced; the deteriorated structure may have its capacity affected, leading to excessive vibration problems or even possible failure. It is therefore important to develop methods that can simultaneously detect the existence of damage and quantify its extent. In this paper the authors propose a method to detect and quantify structural damage using response transmissibilities measured along the structure. Some numerical simulations are presented and a comparison is made with results using frequency response functions. Experimental tests are also undertaken to validate the proposed technique. (C) 2011 Elsevier Ltd. All rights reserved.
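As a rough illustration of the transmissibility-based idea, the sketch below (Python, assuming SciPy is available) estimates a transmissibility function from two measured response time histories and computes a simple change indicator. The H1-type spectral estimate is standard; the scalar indicator is an illustrative choice, not necessarily the paper's formulation.

```python
import numpy as np
from scipy.signal import csd, welch

def transmissibility(x_i, x_j, fs, nperseg=1024):
    """H1-type estimate of the transmissibility T_ij(w) = X_i(w)/X_j(w)
    between two measured response time histories."""
    f, S_ij = csd(x_i, x_j, fs=fs, nperseg=nperseg)  # cross-spectrum
    _, S_jj = welch(x_j, fs=fs, nperseg=nperseg)     # auto-spectrum of reference
    return f, S_ij / S_jj

def damage_indicator(T_ref, T_dam):
    """Illustrative scalar indicator: normalized change between the baseline
    and the possibly damaged transmissibility over the measured band."""
    return np.sum(np.abs(T_dam - T_ref)) / np.sum(np.abs(T_ref))
```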
Abstract:
PhD thesis in Educational Sciences (specialization in Politics of Education).
Abstract:
We study the asymmetric and dynamic dependence between financial assets and demonstrate, from the perspective of risk management, the economic significance of dynamic copula models. First, we construct stock and currency portfolios sorted on different characteristics (ex ante beta, coskewness, cokurtosis and order flows), and find substantial evidence of dynamic evolution between the high beta (respectively, coskewness, cokurtosis and order flow) portfolios and the low beta (coskewness, cokurtosis and order flow) portfolios. Second, using three different dependence measures, we show the presence of asymmetric dependence between these characteristic-sorted portfolios. Third, we use a dynamic copula framework based on Creal et al. (2013) and Patton (2012) to forecast the portfolio Value-at-Risk of long-short (high minus low) equity and FX portfolios. We use several widely used univariate and multivariate VaR models for the purpose of comparison. Backtesting our methodology, we find that the asymmetric dynamic copula models provide more accurate forecasts, in general, and, in particular, perform much better during the recent financial crises, indicating the economic significance of incorporating dynamic and asymmetric dependence in risk management.
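One standard way to backtest VaR forecasts of this kind is Kupiec's proportion-of-failures test; a minimal sketch follows (the function name and inputs are placeholders, not the paper's code).

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(returns, var_forecasts, alpha=0.01):
    """Kupiec proportion-of-failures backtest: a violation occurs when the
    realized return falls below the negative of the VaR forecast."""
    viol = np.asarray(returns) < -np.asarray(var_forecasts)
    n, x = len(viol), int(viol.sum())
    if x in (0, n):                      # degenerate cases: one term vanishes
        lr = -2 * ((n - x) * np.log(1 - alpha) + x * np.log(alpha))
    else:
        pi = x / n
        lr = -2 * ((n - x) * np.log((1 - alpha) / (1 - pi))
                   + x * np.log(alpha / pi))
    return lr, chi2.sf(lr, df=1)         # small p-value rejects correct coverage
```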
Abstract:
We investigate the dynamic and asymmetric dependence structure between equity portfolios from the US and UK. We demonstrate the statistical significance of dynamic asymmetric copula models in modelling and forecasting market risk. First, we construct "high-minus-low" equity portfolios sorted on beta, coskewness, and cokurtosis. We find substantial evidence of dynamic and asymmetric dependence between characteristic-sorted portfolios. Second, we consider a dynamic asymmetric copula model by combining the generalized hyperbolic skewed t copula with the generalized autoregressive score (GAS) model to capture both the multivariate non-normality and the dynamic and asymmetric dependence between equity portfolios. We demonstrate its usefulness by evaluating the forecasting performance of Value-at-Risk and Expected Shortfall for the high-minus-low portfolios. From back-testing, we find consistent and robust evidence that our dynamic asymmetric copula model provides the most accurate forecasts, indicating the importance of incorporating the dynamic and asymmetric dependence structure in risk management.
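As a complement, here is a minimal sketch of how VaR and Expected Shortfall for a high-minus-low portfolio could be computed from simulated joint draws (e.g., from a fitted copula model); the Gaussian draws below are placeholders standing in for copula simulations.

```python
import numpy as np

def var_es_from_draws(sim_high, sim_low, alpha=0.05):
    """VaR and Expected Shortfall of a long-short (high minus low) portfolio
    from simulated joint return draws, e.g. from a fitted copula model."""
    pnl = np.asarray(sim_high) - np.asarray(sim_low)  # long high, short low
    var = -np.quantile(pnl, alpha)                    # loss threshold at level alpha
    es = -pnl[pnl <= -var].mean()                     # average loss beyond VaR
    return var, es

# Placeholder draws standing in for copula-simulated portfolio returns
rng = np.random.default_rng(0)
high, low = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], 10_000).T
print(var_es_from_draws(high, low))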
Abstract:
In the Ballabeina study, we investigated age- and BMI-group-related differences in aerobic fitness (20 m shuttle run), agility (obstacle course), dynamic balance (balance beam), static balance (balance platform), and physical activity (PA, accelerometers) in 613 children (M age = 5.1 years, SD = 0.6). Normal weight (NW) children performed better than overweight (OW) children in aerobic fitness, agility, and dynamic balance (all p < .001), while OW children had better static balance (p < .001). BMI-group-related differences in aerobic fitness and agility were larger in older children (p for interaction with age = .01) in favor of the NW children. PA did not differ between NW and OW children (p ≥ .1), but did differ between NW and obese children (p < .05). BMI-group-related differences in physical fitness can already be present in preschool-age children.
Abstract:
Objective: To evaluate the safety of the performance of the traditional and protected collection techniques of tracheal aspirate and to identify qualitative and quantitative agreement of the results of microbiological cultures between the techniques. Method: Clinical, prospective, comparative, single-blind research. The sample was composed of 54 patients, >18 years of age, undergoing invasive mechanical ventilation for a period of ≥48 hours and with suspected Ventilator-Associated Pneumonia. The two techniques were implemented in the same patient, one immediately after the other, in random order, according to randomization by specialized software. Results: No significant events of oxygen desaturation, hemodynamic instability, or tracheobronchial hemorrhage occurred (p<0.05) and, although there were differences in some strains, there was qualitative and quantitative agreement between the techniques (p<0.001). Conclusion: Utilization of the protected technique provided no advantage over the traditional one, and the execution of both techniques was safe for the patient.
Abstract:
Perceptual maps have been used for decades by market researchers to inform them about the similarity between brands in terms of a set of attributes, to position consumers relative to brands in terms of their preferences, or to study how demographic and psychometric variables relate to consumer choice. Invariably these maps are two-dimensional and static. As we enter the era of electronic publishing, the possibilities for dynamic graphics are opening up. We demonstrate the usefulness of introducing motion into perceptual maps through four examples. The first example shows how a perceptual map can be viewed in three dimensions, and the second one moves between two analyses of the data that were collected according to different protocols. In a third example we move from the best view of the data at the individual level to one which focuses on between-group differences in aggregated data. A final example considers the case when several demographic variables or market segments are available for each respondent, showing an animation with increasingly detailed demographic comparisons. These examples of dynamic maps use several data sets from marketing and social science research.
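A basic ingredient of such animations is moving smoothly between two map configurations. A minimal sketch, assuming two 2-D coordinate matrices for the same set of points: Procrustes-align the configurations to remove arbitrary rotation and scale, then interpolate frame by frame.

```python
import numpy as np
from scipy.spatial import procrustes

def map_frames(coords_a, coords_b, n_frames=30):
    """Yield interpolated 2-D configurations for an animated transition
    between two perceptual maps of the same points."""
    a, b, _ = procrustes(coords_a, coords_b)  # align: remove rotation/scale
    for t in np.linspace(0.0, 1.0, n_frames):
        yield (1 - t) * a + t * b             # intermediate configuration
```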
Abstract:
In response to the mandate on Load and Resistance Factor Design (LRFD) implementations by the Federal Highway Administration (FHWA) on all new bridge projects initiated after October 1, 2007, the Iowa Highway Research Board (IHRB) sponsored these research projects to develop regional LRFD recommendations. The LRFD development was performed using the Iowa Department of Transportation (DOT) Pile Load Test database (PILOT). To increase the data points for LRFD development, develop LRFD recommendations for dynamic methods, and validate the results of the LRFD calibration, 10 full-scale field tests on the most commonly used steel H-piles (e.g., HP 10 x 42) were conducted throughout Iowa. Detailed in situ soil investigations were carried out, push-in pressure cells were installed, and laboratory soil tests were performed. Pile responses during driving, at the end of driving (EOD), and at re-strikes were monitored using the Pile Driving Analyzer (PDA), followed by CAse Pile Wave Analysis Program (CAPWAP) analyses. The hammer blow counts were recorded for the Wave Equation Analysis Program (WEAP) and dynamic formulas. Static load tests (SLTs) were performed and the pile capacities were determined based on Davisson's criterion. The extensive experimental research studies generated important data for analytical and computational investigations. The SLT-measured load-displacements were compared with the simulated results obtained using a model of the TZPILE program and using the modified borehole shear test method. Two analytical pile setup quantification methods, in terms of soil properties, were developed and validated. A new calibration procedure was developed to incorporate pile setup into LRFD.
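For reference, Davisson's criterion locates the failure load where the measured pile-head displacement first exceeds the elastic compression line PL/(AE) offset by 0.15 in + D/120 (D = pile width in inches). A minimal sketch applying it to load-displacement data (implementation details are my own, not the project's software):

```python
import numpy as np

def davisson_capacity(load, disp, L, A, E, D):
    """Davisson offset criterion: failure load where measured displacement
    first exceeds PL/(AE) + 0.15 + D/120.
    Units: load [kip], disp [in], L [in], A [in^2], E [ksi], D [in]."""
    load, disp = np.asarray(load, float), np.asarray(disp, float)
    limit = load * L / (A * E) + 0.15 + D / 120.0   # offset elastic line
    g = disp - limit
    if not (g >= 0).any():
        return None                                  # criterion not reached
    i = int(np.argmax(g >= 0))                       # first crossing
    if i == 0:
        return float(load[0])
    f = -g[i - 1] / (g[i] - g[i - 1])                # linear interpolation
    return float(load[i - 1] + f * (load[i] - load[i - 1]))
```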
Abstract:
Disasters are often perceived as fast and random events. While the triggers may be sudden, disasters are the result of an accumulation of consequences from inappropriate actions and decisions and from global change. To modify this perception of risk, advocacy tools are needed. Quantitative methods have been developed to identify the distribution and the underlying factors of risk.

Disaster risk results from the intersection of hazards, exposure and vulnerability. The frequency and intensity of hazards can be influenced by climate change or by the decline of ecosystems. Population growth increases the exposure, while changes in the level of development affect the vulnerability. Given that each of these components may change, risk is dynamic and should be reassessed periodically by governments, insurance companies or development agencies. At the global level, these analyses are often performed using databases of reported losses. Our results show that these are likely to be biased, in particular by improvements in access to information. International loss databases are not exhaustive and give no information on exposure, intensity or vulnerability. A new approach, independent of reported losses, is therefore necessary.

The research presented here was mandated by the United Nations and by agencies working in development and the environment (UNDP, UNISDR, GTZ, UNEP and IUCN). These organizations needed a quantitative assessment of the underlying factors of risk, to raise awareness among policymakers and to prioritize disaster risk reduction projects.

The method is based on geographic information systems, remote sensing, databases and statistical analysis. It required a large amount of data (1.7 Tb, covering both the physical environment and socio-economic parameters) and several thousand hours of processing. A comprehensive global risk model was developed to reveal the distribution of hazards, exposure and risk, and to identify the underlying risk factors for several hazards (floods, tropical cyclones, earthquakes and landslides). Two different multiple-risk indexes were generated to compare countries. The results include an evaluation of the role of hazard intensity, exposure, poverty and governance in the pattern and trends of risk. It appears that vulnerability factors change depending on the type of hazard and that, contrary to exposure, their weight decreases as intensity increases.

At the local level, the method was tested to highlight the influence of climate change and ecosystem decline on the hazard. In northern Pakistan, deforestation increases landslide susceptibility. Research in Peru (based on satellite imagery and ground data collection) revealed a rapid glacier retreat and provided an assessment of the remaining ice volume as well as scenarios of possible evolution.

These results were presented to different audiences, including 160 governments. The results and the data generated are available online through an open-source SDI (http://preview.grid.unep.ch). The method is flexible and easily transferable to different scales and issues, with good prospects for adaptation to other research areas. Risk characterization at the global level and the identification of the role of ecosystems in disaster risk are rapidly developing fields. This research revealed many challenges; some were resolved, while others remain limitations. However, it is clear that the level of development, and moreover unsustainable development, configures a large part of disaster risk, and that the dynamics of risk are governed primarily by global change.
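The multiplicative structure underlying such global models is often summarized as risk ≈ hazard frequency × exposure × vulnerability, evaluated per grid cell and aggregated. A minimal sketch with placeholder rasters (the actual model uses far richer GIS layers and statistically fitted vulnerability terms):

```python
import numpy as np

# Placeholder per-cell layers standing in for the GIS rasters of the model:
# annual hazard frequency, exposed population, and a vulnerability proxy.
hazard_freq   = np.array([[0.02, 0.10], [0.00, 0.05]])
exposed_pop   = np.array([[12_000, 3_500], [800, 40_000]])
vulnerability = np.array([[0.30, 0.15], [0.50, 0.25]])

risk = hazard_freq * exposed_pop * vulnerability  # expected annual impact per cell
print(risk.sum())                                 # aggregate to a country index
```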
Abstract:
The aim of this study was to describe the demographic, clinicopathological, biological and morphometric features of Libyan breast cancer patients. The supporting value of nuclear morphometry and static image cytometry for the sensitivity of detecting breast cancer in conventional fine-needle aspiration biopsies was estimated. The findings were compared with findings in breast cancer in Finland and Nigeria. In addition, the values of ER and PR were evaluated. There were 131 histological samples, 41 cytological samples, and demographic and clinicopathological data from 234 Libyan patients. Libyan breast cancer is predominantly premenopausal, and in this respect it is similar to breast cancer in sub-Saharan Africans but clearly different from breast cancer in Europeans, which is predominantly postmenopausal in character. At presentation most Libyan patients have locally advanced disease, which is associated with poor survival rates. Nuclear morphometry and image DNA cytometry agree with earlier published data in the Finnish population and indicate that nuclear size and DNA analysis of nuclear content can be used to increase the cytological sensitivity and specificity in doubtful breast lesions, particularly when the free cell sampling method is used. Combining the morphometric data with earlier free cell data gave the following diagnostic guidelines: range of overlap in free cell samples: 55 μm²–71 μm²; cut-off values for diagnostic purposes: mean nuclear area (MNA) > 54 μm² for 100% detection of malignant cases (specificity 84%), MNA < 72 μm² for 100% detection of benign cases (sensitivity 91%). Histomorphometry showed a significant correlation between the MNA and most clinicopathological features, with the strongest association observed for histological grade (p < 0.0001). MNA seems to be a prognosticator in Libyan breast cancer (Pearson's test r = -0.29, p = 0.019), but at a lower level of significance than in the European material. A corresponding relationship was not found in shape-related morphometric features. ER and PR staining scores correlated with clinical stage (p = 0.017 and 0.015, respectively) and were also associated with lymph-node-negative patients (p = 0.03 and p = 0.05, respectively). Receptor-positive (HR+) patients had better survival. The fraction of HR+ cases among Libyan breast cancers is about the same as the fraction of positive cases in European breast cancer. The study suggests that even weak staining (corresponding to as few as 1% positive cells) has prognostic value. The prognostic significance may be associated with the practice of using antihormonal therapy in HR+ cases. The low survival and advanced presentation are associated with active cell proliferation, atypical nuclear morphology and aneuploid nuclear DNA content in Libyan breast cancer patients. The findings support the idea that breast cancer is not one type of disease, but should probably be classified into premenopausal and postmenopausal types.
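The reported cut-offs translate into a simple triage rule for free cell samples; the sketch below illustrates that rule only and is not a validated diagnostic tool.

```python
def classify_by_mna(mna_um2):
    """Triage by mean nuclear area (MNA) using the reported cut-offs:
    MNA > 54 um^2 captures 100% of malignant cases (specificity 84%),
    MNA < 72 um^2 captures 100% of benign cases (sensitivity 91%);
    55-71 um^2 is the reported zone of overlap."""
    if mna_um2 <= 54:
        return "benign"        # below the malignant detection cut-off
    if mna_um2 >= 72:
        return "malignant"     # above the benign cut-off
    return "indeterminate"     # overlap zone: further work-up needed
```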
Abstract:
Guided by the social-ecological conceptualization of bullying, this thesis examines the implications of classroom and school contexts—that is, students’ shared microsystems—for peer-to-peer bullying and antibullying practices. Included are four original publications, three of which are empirical studies utilizing data from a large Finnish sample of students in the upper grade levels of elementary school. Both self- and peer reports of bullying and victimization are utilized, and the hierarchical nature of the data collected from students nested within school ecologies is accounted for by multilevel modeling techniques. The first objective of the thesis is to simultaneously examine risk factors for victimization at individual, classroom, and school levels (Study I). The second objective is to uncover the individual- and classroom-level working mechanisms of the KiVa antibullying program which has been shown to be effective in reducing bullying problems in Finnish schools (Study II). Thirdly, an overview of the extant literature on classroom- and school-level contributions to bullying and victimization is provided (Study III). Finally, attention is paid to the assessment of victimization and, more specifically, to how the classroom context influences the concordance between self- and peer reports of victimization (Study IV). Findings demonstrate the multiple ways in which contextual factors, and importantly students’ perceptions thereof, contribute to the bullying dynamic and efforts to counteract it. Whereas certain popular beliefs regarding the implications of classroom and school contexts do not receive support, the role of peer contextual factors and the significance of students’ perceptions of teachers’ attitudes toward bullying are highlighted. Directions for future research and school-based antibullying practices are suggested.
Abstract:
The purpose of the present study was to measure contrast sensitivity to equiluminant gratings using the steady-state visual evoked cortical potential (ssVECP) and psychophysics. Six healthy volunteers were evaluated with ssVECPs and psychophysics. The visual stimuli were red-green or blue-yellow horizontal sinusoidal gratings, 5° × 5°, 34.3 cd/m² mean luminance, presented at 6 Hz. Eight spatial frequencies from 0.2 to 8 cpd were used, each presented at 8 contrast levels. Contrast threshold was obtained by extrapolating second-harmonic amplitude values to zero. Psychophysical contrast thresholds were measured using stimuli at 6 Hz and static presentation. Contrast sensitivity was calculated as the inverse function of the pooled cone contrast threshold. The ssVECP and both psychophysical contrast sensitivity functions (CSFs) were low-pass functions for red-green gratings. For electrophysiology, the highest contrast sensitivity values were found at 0.4 cpd (1.95 ± 0.15). The ssVECP CSF was similar to the dynamic psychophysical CSF, while the static CSF had higher values from 0.4 to 6 cpd (P < 0.05, ANOVA). Blue-yellow chromatic functions showed no specific tuning shape; however, at high spatial frequencies the evoked potentials showed higher contrast sensitivity than the psychophysical methods (P < 0.05, ANOVA). Evoked potentials can be used reliably to evaluate chromatic red-green CSFs in agreement with psychophysical thresholds, particularly if the same temporal properties are applied to the stimulus. For the blue-yellow CSF, correlation between electrophysiology and psychophysics was poor at high spatial frequencies, possibly due to a greater effect of chromatic aberration on this kind of stimulus.
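A minimal sketch of the zero-amplitude extrapolation described above: regress second-harmonic amplitude on contrast, take the x-intercept as the threshold, and invert it for sensitivity (the numbers below are placeholders, not study data).

```python
import numpy as np

def ssvecp_threshold(contrast, amplitude_2f):
    """Contrast threshold by extrapolating the second-harmonic amplitude
    vs. contrast line to zero amplitude; sensitivity = 1/threshold."""
    slope, intercept = np.polyfit(contrast, amplitude_2f, deg=1)
    threshold = -intercept / slope        # x-intercept of the fitted line
    return threshold, 1.0 / threshold

# Placeholder amplitudes (uV) at 8 contrast levels
c = np.array([0.02, 0.04, 0.08, 0.12, 0.16, 0.24, 0.32, 0.48])
a = np.array([0.10, 0.30, 0.70, 1.00, 1.40, 2.10, 2.90, 4.30])
print(ssvecp_threshold(c, a))
```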
Abstract:
Chemical sensors are of growing interest for the determination of food additives (which can create toxicity and may cause serious health concerns), drugs and metal ions. A chemical sensor can be defined as a device that transforms chemical information, ranging from the concentration of a specific sample component to total composition analysis, into an analytically useful signal. The chemical information may be generated from a chemical reaction of the analyte or from a physical property of the system investigated. Two main steps involved in the functioning of a chemical sensor are recognition and transduction. Chemical sensors employ specific transduction techniques to yield analyte information. The most widely used techniques employed in chemical sensors are optical absorption, luminescence, redox potential, etc. According to the operating principle of the transducer, chemical sensors may be classified as electrochemical sensors, optical sensors, mass-sensitive sensors, heat-sensitive sensors, etc. Electrochemical sensors are devices that transform the effect of the electrochemical interaction between analyte and electrode into a useful signal. They are very widespread as they offer simple instrumentation, very good sensitivity with wide linear concentration ranges, rapid analysis times and simultaneous determination of several analytes. These include voltammetric, potentiometric and amperometric sensors. Fluorescence sensing of chemical and biochemical analytes is an active area of research. Any phenomenon that results in a change of fluorescence intensity, anisotropy or lifetime can be used for sensing. The fluorophores are mixed with the analyte solution and excited at the corresponding wavelength. The change in fluorescence intensity (enhancement or quenching) is directly related to the concentration of the analyte. Fluorescence quenching refers to any process that decreases the fluorescence intensity of a sample, including a variety of molecular rearrangements, energy transfer, ground-state complex formation and collisional quenching. Generally, fluorescence quenching can occur by two different mechanisms, dynamic quenching and static quenching. The thesis presents the development of voltammetric and fluorescent sensors for the analysis of pharmaceuticals, food additives and metal ions. The developed sensors were successfully applied for the determination of analytes in real samples. Chemical sensors have multidisciplinary applications. The development and application of voltammetric and optical sensors continue to be an exciting and expanding area of research in analytical chemistry. The synthesis of biocompatible fluorophores and their use in clinical analysis, and the development of disposable sensors for clinical analysis, are still challenging tasks. The ability to make sensitive and selective measurements and the requirement of less expensive equipment make electrochemical and fluorescence-based sensors attractive.
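Dynamic and static quenching are both commonly quantified through the Stern-Volmer relation F0/F = 1 + K_SV[Q]; lifetime data then separate the two mechanisms, since tau0/tau follows the same line only for dynamic quenching. A minimal sketch with placeholder data:

```python
import numpy as np

def stern_volmer_constant(q_conc, f0, f):
    """Estimate K_SV from F0/F = 1 + K_SV*[Q] by least squares through
    the origin on (F0/F - 1) versus quencher concentration [Q]."""
    q = np.asarray(q_conc, float)
    y = f0 / np.asarray(f, float) - 1.0
    return float(q @ y) / float(q @ q)

# Placeholder quenching series: [Q] in mol/L and fluorescence intensities
q = np.array([0.02, 0.04, 0.06, 0.08])
F0, F = 100.0, np.array([83.0, 71.0, 62.0, 55.0])
print(stern_volmer_constant(q, F0, F))   # K_SV in L/mol
```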
Abstract:
The rapid growth in high-data-rate communication systems has introduced new highly spectrally efficient modulation techniques and standards such as LTE-A (Long Term Evolution-Advanced) for 4G (4th generation) systems. These techniques provide broader bandwidth but introduce a high peak-to-average power ratio (PAR) problem at the high power amplifier (HPA) level of the communication system base transceiver station (BTS). To avoid spectral spreading due to high PAR, a stringent linearity requirement is imposed, which forces the HPA to operate at large power back-off at the expense of power efficiency. Consequently, high-power devices are fundamental in HPAs for high linearity and efficiency. Recent development in wide-bandgap power devices, in particular the AlGaN/GaN HEMT, has offered higher power levels with a superior linearity-efficiency trade-off in microwave communications. For a cost-effective HPA design-to-production cycle, rigorous computer-aided design (CAD) AlGaN/GaN HEMT models are essential to reflect the real response with increasing power level and channel temperature. Therefore, a large-signal electrothermal modeling procedure for large-size AlGaN/GaN HEMTs is proposed. The HEMT structure analysis, characterization, data processing, model extraction and model implementation phases are covered in this thesis, including trapping and self-heating dispersion, which account for nonlinear drain current collapse. The small-signal model is extracted using the 22-element modeling procedure developed in our department. The intrinsic large-signal model is investigated in depth in conjunction with linearity prediction. The accuracy of the nonlinear drain current has been enhanced by addressing several issues, such as trapping and self-heating characterization. Also, the thermal profile of the HEMT structure has been investigated, and the corresponding thermal resistance has been extracted through thermal simulation and chuck-controlled-temperature pulsed I(V) and static DC measurements. A higher-order equivalent thermal model is extracted and implemented in the HEMT large-signal model to accurately estimate the instantaneous channel temperature. Moreover, trapping and self-heating transients have been characterized through transient measurements. The obtained time constants are represented by equivalent sub-circuits and integrated into the nonlinear drain current implementation to enable dynamic prediction for complex communication signals. Verification of this table-based large-size large-signal electrothermal model implementation has demonstrated high accuracy in terms of output power, gain, efficiency and nonlinearity prediction with respect to standard large-signal test signals.
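The instantaneous channel temperature estimate described above typically comes from a higher-order (e.g., Foster-type) RC thermal network driven by the dissipated power; a minimal sketch with placeholder branch values follows.

```python
import numpy as np

def channel_temperature(p_diss, dt, r_th, tau, t_case=25.0):
    """Foster RC thermal network: each branch obeys
    dT_i/dt = (R_i*P - T_i)/tau_i, and T_ch = T_case + sum_i T_i.
    r_th [K/W] and tau [s] are per-branch placeholder values."""
    r, t_c = np.asarray(r_th, float), np.asarray(tau, float)
    a = np.exp(-dt / t_c)            # exact one-step decay for constant P over dt
    t_branch = np.zeros_like(r)
    temps = []
    for p in p_diss:
        t_branch = a * t_branch + (1 - a) * r * p
        temps.append(t_case + t_branch.sum())
    return np.array(temps)

# Placeholder 3rd-order network, 2 W dissipated over 5 ms
print(channel_temperature([2.0] * 5, dt=1e-3,
                          r_th=[5.0, 3.0, 1.5], tau=[1e-4, 1e-3, 1e-2])[-1])
```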