991 results for DEFORMATION METHODS
Abstract:
Idiopathic pulmonary fibrosis (IPF) is an interstitial lung disease with unknown aetiology and poor prognosis. IPF is characterized by alveolar epithelial damage that leads to tissue remodelling and ultimately to the loss of normal lung architecture and function. Treatment has focused on anti-inflammatory therapies, but because of their poor efficacy new therapeutic modalities are being sought. There is a need for early diagnosis and also for markers that differentiate IPF from other interstitial lung diseases. The study utilized patient material obtained from bronchoalveolar lavage (BAL), diagnostic biopsies or lung transplantation. Human pulmonary fibroblast cell cultures were propagated, and asbestos-induced pulmonary fibrosis in mice was used as an experimental animal model of IPF. Possible markers for IPF were screened by immunohistochemistry, RT-PCR, ELISA and western blot. Matrix metalloproteinases (MMPs) are proteolytic enzymes that participate in tissue remodelling. Microarray studies have introduced potential markers that could serve as additional tools for the assessment of IPF, one of the most promising being MMP-7. MMP-7 protein levels were measured in the BAL fluid of patients with idiopathic interstitial lung diseases or idiopathic cough. MMP-7 was, however, similarly elevated in the BAL fluid in all of these disorders and thus cannot be used as a differential diagnostic marker for IPF. Activation of transforming growth factor (TGF)-β is considered a key element in the progression of IPF. Bone morphogenetic proteins (BMPs) are negative regulators of intracellular TGF-β signalling, and BMP-4 signalling is in turn negatively regulated by gremlin. Gremlin was found to be highly upregulated in IPF lungs and IPF fibroblasts. Gremlin was detected in the thickened IPF parenchyma and in the endothelium of small capillaries, whereas in non-specific interstitial pneumonia it localized predominantly in the alveolar epithelium. Parenchymal gremlin immunoreactivity might therefore indicate IPF-type interstitial pneumonia. Gremlin mRNA levels were higher in patients with end-stage fibrosis, suggesting that gremlin might be a marker of more advanced disease. Characterization of the fibroblastic foci in IPF lungs showed that immunoreactivity to platelet-derived growth factor (PDGF) receptor-α and PDGF receptor-β was elevated in the IPF parenchyma, but the fibroblastic foci themselves showed only minor immunoreactivity to the PDGF receptors or to the antioxidant peroxiredoxin II. Ki67-positive cells were also observed predominantly outside the fibroblastic foci, suggesting that the foci may not be composed of actively proliferating cells. When inhibition of profibrotic PDGF signalling by imatinib mesylate was assessed, imatinib mesylate reduced asbestos-induced pulmonary fibrosis in mice as well as human pulmonary fibroblast migration in vitro, but had no effect on lung inflammation.
Abstract:
Objectives In China, “serious road traffic crashes” (SRTCs) are those involving 10-30 fatalities, 50-100 serious injuries or a total cost of 50-100 million RMB (US$8-16m), and “particularly serious road traffic crashes” (PSRTCs) are those which are more severe or costly. Owing to the large number of fatalities and injuries and the negative public reaction they elicit, SRTCs and PSRTCs have become a major concern in China in recent years. The aim of this study is to identify the main factors contributing to these crashes and to propose preventive measures to reduce their number. Methods 49 contributing factors for the SRTCs and PSRTCs that occurred from 2007 to 2013 were collected from the database “In-depth Investigation and Analysis System for Major Road traffic crashes” (IIASMRTC) and were analysed through the combined use of principal component analysis and hierarchical clustering to determine the primary and secondary groups of contributing factors. Results Speeding and overloading of passengers were the primary contributing factors, featuring in 66.3% and 32.6% of crashes, respectively. Two secondary contributing factors were road-related: missing or non-standard roadside safety infrastructure, and slippery roads due to rain, snow or ice. Conclusions The current approach to SRTCs and PSRTCs focuses on the attribution of responsibility and the enforcement of regulations considered relevant to particular SRTCs and PSRTCs. It would be more effective to investigate the contributing factors and characteristics of SRTCs and PSRTCs as a whole, to provide adequate information for safety interventions in regions where they are more common. In addition to mandating a driver training program and publicising the hazards associated with traffic violations, the implementation of speed cameras, speed signs, road markings and vehicle-mounted GPS is suggested to reduce speeding of passenger vehicles, while increasing regular checks by traffic police and passenger station staff and improving transportation management to increase the income of contractors and drivers are feasible measures to prevent passenger overloading. Other promising measures include regular inspection of roadside safety infrastructure and improving skid resistance on dangerous road sections in mountainous areas.
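The combination of principal component analysis and hierarchical clustering described above can be sketched as follows. This is a generic illustration on synthetic data, not the authors' IIASMRTC analysis; the binary factor matrix, the number of components and the number of clusters are all assumptions made for the example.

```python
# Minimal sketch (not the authors' code): grouping crash contributing factors by
# combining PCA with hierarchical clustering, using a synthetic factor matrix.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical binary matrix: rows = crashes, columns = 49 contributing factors
# (e.g. speeding, passenger overloading, missing roadside barriers, slippery road).
X = rng.integers(0, 2, size=(200, 49)).astype(float)

# Reduce the 49 factors to a few principal components.
pca = PCA(n_components=5)
scores = pca.fit_transform(X)      # crash scores in principal-component space
loadings = pca.components_.T       # loading of each factor on the components

# Cluster the factors by their loading patterns (Ward linkage, Euclidean distance).
Z = linkage(loadings, method="ward")
groups = fcluster(Z, t=2, criterion="maxclust")  # e.g. primary vs secondary group
print(groups)
```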
Abstract:
The hot deformation behaviour of Mg–3Al alloy has been studied using the processing-map technique. Compression tests were conducted in the temperature range 250–550 °C and strain rate range 3 × 10⁻⁴ to 10² s⁻¹, and the flow stress data obtained from the tests were used to develop the processing map. The various domains in the map corresponding to different dissipative characteristics have been identified as follows: (i) a grain boundary sliding (GBS) domain, accommodated by slip controlled by grain boundary diffusion, at slow strain rates (<10⁻³ s⁻¹) in the temperature range 350–450 °C, (ii) two different dynamic recrystallization (DRX) domains with a peak efficiency of 42% at 550 °C/10⁻¹ s⁻¹ and 425 °C/10² s⁻¹, governed by stress-assisted cross-slip and thermally activated climb as the respective rate-controlling mechanisms, and (iii) a dynamic recovery (DRV) domain below 300 °C in the intermediate strain rate range 3 × 10⁻² to 3 × 10⁻¹ s⁻¹. The regimes of flow instability have also been delineated in the processing map using an instability criterion. Adiabatic shear banding at higher strain rates (>10¹ s⁻¹) and solute drag by substitutional Al atoms at intermediate strain rates (3 × 10⁻² to 3 × 10⁻¹ s⁻¹) in the temperature range 350–450 °C are responsible for flow instability. The relevance of these mechanisms to hot-working practice for this material is indicated. The processing maps of Mg–3Al alloy and as-cast Mg have been compared qualitatively to elucidate the effect of alloying with aluminum on the deformation behaviour of magnesium.
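For readers unfamiliar with the processing-map technique, the quantities behind such a map follow from the standard dynamic materials model: the strain-rate sensitivity m = ∂ln σ/∂ln ε̇, the efficiency of power dissipation η = 2m/(m+1), whose iso-contours define the domains, and the instability parameter ξ = ∂ln[m/(m+1)]/∂ln ε̇ + m, with ξ < 0 marking flow instability. The sketch below evaluates these quantities on synthetic flow-stress data; it is not the paper's dataset or code.

```python
# Minimal sketch of the dynamic materials model behind a processing map,
# using synthetic flow-stress data (the paper's measured data are not reproduced).
import numpy as np

strain_rates = np.logspace(-4, 2, 25)      # s^-1
sigma = 40.0 * strain_rates**0.2           # hypothetical flow stress, MPa (m = 0.2)

ln_rate = np.log(strain_rates)
ln_sigma = np.log(sigma)

# Strain-rate sensitivity m = d(ln sigma)/d(ln strain rate)
m = np.gradient(ln_sigma, ln_rate)

# Efficiency of power dissipation, eta = 2m/(m+1), contoured in the processing map
eta = 2.0 * m / (m + 1.0)

# Instability parameter: xi = d(ln(m/(m+1)))/d(ln strain rate) + m, unstable if < 0
xi = np.gradient(np.log(m / (m + 1.0)), ln_rate) + m

print(eta.max(), (xi < 0).any())
```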
Abstract:
Pack ice is an aggregate of ice floes drifting on the sea surface. The forces controlling the motion and deformation of pack ice are the air and water drag forces, sea surface tilt, the Coriolis force and the internal force due to the interaction between ice floes. In this thesis, the mechanical behavior of compacted pack ice is investigated using theoretical and numerical methods, focusing on three basic material properties: compressive strength, yield curve and flow rule. A high-resolution three-category sea ice model is applied to investigate sea ice dynamics in two small basins, the Gulf of Riga as a whole and Pärnu Bay within it, focusing on the calibration of the compressive strength for thin ice. These two basins are on scales of 100 km and 20 km, respectively, with typical ice thicknesses of 10-30 cm. The model is found capable of capturing the main characteristics of the ice dynamics. The compressive strength is calibrated to be about 30 kPa, consistent with the values from most large-scale sea ice dynamics studies. In addition, the numerical study in Pärnu Bay suggests that the shear strength drops significantly when the ice-floe size markedly decreases. A characteristic inversion method is developed to probe the yield curve of compacted pack ice. The basis of this method is the relationship between the intersection angle of linear kinematic features (LKFs) in sea ice and the slope of the yield curve. A summary of the observed LKFs shows that they fall broadly into three groups: intersecting leads, uniaxial opening leads and uniaxial pressure ridges. Based on the available observed angles, the yield curve is determined to be a curved diamond. Comparison of this yield curve with those obtained by other methods shows that it possesses almost all the advantages identified by those methods. A new constitutive law is proposed in which the yield curve is a diamond and the flow rule is a combination of the normal and co-axial flow rules. The non-normal co-axial flow rule is necessary for the Coulombic yield constraint. This constitutive law not only captures the main features of LKF formation but also has the advantage of avoiding overestimation of divergence during shear deformation. Moreover, this study provides a method for observing the flow rule of pack ice during deformation.
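To illustrate the kind of relation such an inversion can exploit (the thesis's exact formulation is not reproduced here), classical Coulomb faulting theory links the acute intersection angle 2θ between conjugate failure lines to the internal friction angle φ through 2θ = 90° − φ, so an observed LKF intersection angle constrains the local slope μ = tan φ of a Coulombic yield curve. A minimal sketch assuming that textbook relation:

```python
# Illustrative only: infer a Coulombic yield-curve slope from an observed LKF
# intersection angle using the classical conjugate-fault relation 2*theta = 90 - phi.
import math

def yield_slope_from_intersection(two_theta_deg: float) -> float:
    """Return mu = tan(phi) for an observed acute intersection angle 2*theta (degrees)."""
    phi_deg = 90.0 - two_theta_deg
    return math.tan(math.radians(phi_deg))

print(yield_slope_from_intersection(30.0))  # a 30 deg intersection -> mu ~ tan(60 deg)
```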
Abstract:
Visual content is a critical component of everyday social media: on platforms explicitly framed around the visual (Instagram and Vine), on those offering a mix of text and images in myriad forms (Facebook, Twitter, and Tumblr), and in apps and profiles where visual presentation and provision of information are important considerations. However, despite being so prominent in forms such as selfies, looping media, infographics, memes, online videos, and more, sociocultural research into the visual as a central component of online communication has lagged behind the analysis of popular, predominantly text-driven social media. This paper underlines the increasing importance of visual elements to digital, social, and mobile media within everyday life, addressing the significant research gap in methods for tracking, analysing, and understanding visual social media as both image-based and intertextual content. We build on our previous methodological considerations of Instagram in isolation to examine further questions, challenges, and benefits of studying visual social media more broadly, including methodological and ethical considerations. Our discussion is intended as a rallying cry and provocation for further research into visual (and textual and mixed) social media content, practices, and cultures, mindful of the specificities of each form but also, importantly, of the ongoing dialogues and interrelations between them as communication forms.
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolution of the atmospheric state with time can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that largely depend on the intended use and on the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1-4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For clear-sky longwave radiation, the parameterization schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are competitive in producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent to both longwave and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is caused by an inertial oscillation mechanism when the large-scale flow is from south-easterly or westerly directions. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment with a 7.7 km grid size is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is crucial, especially if the inner meso-scale model domain is small.
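As a rough illustration of the inertial-oscillation mechanism invoked for the LLJ (the numbers below are illustrative, not taken from the thesis), the period of an inertial oscillation is 2π/f with the Coriolis parameter f = 2Ω sin φ; at the approximate latitude of the Gulf of Finland (about 60°N) this gives roughly 14 hours, consistent with a jet that builds up over an afternoon and evening.

```python
# Back-of-the-envelope check (not from the thesis): inertial oscillation period
# at the approximate latitude of the Gulf of Finland.
import math

OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s
lat_deg = 60.0      # approximate latitude of the Gulf of Finland

f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))  # Coriolis parameter, s^-1
period_h = 2.0 * math.pi / f / 3600.0              # inertial period, hours

print(f"f = {f:.2e} s^-1, inertial period ~ {period_h:.1f} h")
```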
Abstract:
The superconducting (or cryogenic) gravimeter (SG) is based on the levitation of a superconducting sphere in a stable magnetic field created by currents in superconducting coils. Depending on frequency, it is capable of detecting gravity variations as small as 10⁻¹¹ m s⁻². For a single event, the detection threshold is higher, conservatively about 10⁻⁹ m s⁻². Because of its high sensitivity and low drift rate, the SG is eminently suitable for the study of geodynamical phenomena through their gravity signatures. I present investigations of Earth dynamics with the superconducting gravimeter GWR T020 at Metsähovi from 1994 to 2005. The history and key technical details of the installation are given. The data processing methods and the development of the local tidal model at Metsähovi are presented. The T020 is part of the worldwide GGP (Global Geodynamics Project) network, which consists of 20 operating stations. The data of the T020 and of the other participating SGs are available to the scientific community. The SG T020 has been used as a long-period seismometer to study microseismicity and the Earth's free oscillations. The annual variation, spectral distribution, amplitude and sources of microseisms at Metsähovi are presented. Free oscillations excited by three large earthquakes were analyzed: the spectra, attenuation and rotational splitting of the modes. The lowest modes of all the different oscillation types are studied, i.e. the radial mode ₀S₀, the "football mode" ₀S₂, and the toroidal mode ₀T₂. The very low level (0.01 nm s⁻¹) incessant excitation of the Earth's free oscillations was detected with the T020. The recovery of global and regional variations in gravity with the SG requires the modelling of local gravity effects, the most important of which is hydrology. The variation in the groundwater level at Metsähovi, as measured in a borehole in the fractured bedrock, correlates significantly (0.79) with gravity. The influence of local precipitation, soil moisture and snow cover is detectable in the gravity record. The gravity effect of the variation in atmospheric mass and that of the non-tidal loading by the Baltic Sea were investigated together, as sea level and air pressure are correlated. Using Green's functions it was calculated that a 1 metre uniform layer of water in the Baltic Sea increases the gravity at Metsähovi by 31 nm s⁻² and produces a vertical deformation of -11 mm. The regression coefficient for sea level is 27 nm s⁻² m⁻¹, which is 87% of the uniform model. These studies are combined with temporal height variations derived from the GPS data of the Metsähovi permanent station. The long time series at Metsähovi demonstrates the high quality of the data and of the offset and drift corrections. The superconducting gravimeter T020 has proved to be an excellent and versatile tool for studies of Earth dynamics.
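As a quick consistency check on the quoted sea-level admittance (the two values are copied from the abstract; the calculation itself is only illustrative), the observed regression coefficient of 27 nm s⁻² m⁻¹ is indeed about 87% of the 31 nm s⁻² per metre predicted for a uniform water layer:

```python
# Quick check of the quoted admittance ratio (values copied from the abstract).
uniform_model = 31.0  # nm s^-2 per metre of uniform Baltic Sea water level
regression = 27.0     # observed regression coefficient, nm s^-2 m^-1

print(f"{regression / uniform_model:.0%}")  # -> 87%
```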
Abstract:
An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task which has lacked a robust method until now. The methods are based on the solid foundation of statistical orbital inversion, properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a loglinear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys, which anticipate a substantial increase in the astrometric data rate. Because of the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. The aim of identification is therefore to find a set of orbital elements that reproduces the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up. Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible non-gravitational effects within the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages, typically spanning several apparitions, have so far been found among designated observation sets each spanning less than 48 hours.
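A loglinear linking step of the general kind described here (this is a generic illustration, not the thesis's algorithm) can be obtained by reducing each observation set to a point in a low-dimensional address space, for example a predicted sky position at a common epoch, and querying a spatial tree for nearby candidates, so that the comparison never has to be carried out for all n² pairs.

```python
# Generic illustration (not the thesis's method): candidate linking in O(n log n)
# by reducing each observation set to a hypothetical 2-D address (e.g. a predicted
# sky position at a reference epoch) and querying a k-d tree for close pairs.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
n = 100_000
ra = rng.uniform(0.0, 360.0, n)     # hypothetical reduced coordinate 1, degrees
dec = rng.uniform(-30.0, 30.0, n)   # hypothetical reduced coordinate 2, degrees
points = np.column_stack([ra, dec])

tree = cKDTree(points)              # build: O(n log n)
pairs = tree.query_pairs(r=0.01)    # candidate linkages within 0.01 degrees
print(len(pairs), "candidate linkages to verify with orbital inversion")
```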
Abstract:
"We thank MrGilder for his considered comments and suggestions for alternative analyses of our data. We also appreciate Mr Gilder’s support of our call for larger studies to contribute to the evidence base for preoperative loading with high-carbohydrate fluids..."
Abstract:
We examine the 2D plane-strain deformation of initially round, matrix-bonded, deformable single inclusions in isothermal simple shear using a recently introduced hyperelastoviscoplastic rheology. The broad parameter space spanned by the wide range of effective viscosities, yield stresses, relaxation times, and strain rates encountered in the ductile lithosphere is explored systematically for weak and strong inclusions, whose effective viscosity varies with respect to the matrix. Most inclusion studies to date have focused on elastic or purely viscous rheologies. Comparing our results with linear-viscous inclusions in a linear-viscous matrix, we observe significantly different shape evolution of weak and strong inclusions over most of the relevant parameter space. The evolution of inclusion inclination relative to the shear plane is more strongly affected by the elastic and plastic contributions to rheology in the case of strong inclusions. In addition, we find that strong inclusions deform in the transient viscoelastic stress regime at high Weissenberg numbers (≥0.01) up to bulk shear strains larger than 3. Studies using the shapes of deformed objects for finite-strain analysis or viscosity-ratio estimation should therefore establish carefully which rheology and loading conditions reflect the material and deformation properties. We suggest that relatively strong, deformable clasts in shear zones retain stored energy up to fairly high shear strains. Hence, purely viscous models of clast deformation may overlook an important contribution to the energy budget, one that may drive dissipation processes within and around natural inclusions.
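For orientation (the parameter values below are illustrative assumptions, not those used in the study), the Weissenberg number can be written as Wi = λγ̇ with a Maxwell relaxation time λ = η/G; for an effective viscosity of 10²¹ Pa s, a shear modulus of 3 × 10¹⁰ Pa and a bulk strain rate of 10⁻¹² s⁻¹ this gives Wi ≈ 0.03, within the regime (≥0.01) where the transient viscoelastic stress state matters.

```python
# Illustrative Weissenberg-number estimate (parameter values are assumptions,
# not the study's): Wi = relaxation time * strain rate, with lambda = eta / G.
eta = 1e21           # effective viscosity, Pa s
G = 3e10             # elastic shear modulus, Pa
strain_rate = 1e-12  # bulk shear strain rate, s^-1

relaxation_time = eta / G          # Maxwell relaxation time, s (~1 kyr)
Wi = relaxation_time * strain_rate

print(f"lambda ~ {relaxation_time:.1e} s, Wi ~ {Wi:.2f}")
```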
Abstract:
Foliage density and leaf area index are important vegetation structure variables. They can be measured by several methods, but few have been tested in tropical forests, which have high structural heterogeneity. In this study, foliage density estimates from two indirect methods, the point quadrat and photographic methods, were compared with those obtained by direct leaf counts in the understorey of a wet evergreen forest in southern India. The point quadrat method tends to overestimate foliage density, whereas the photographic method consistently and significantly underestimates it. There was stratification within the understorey, with areas close to the ground having higher foliage densities.
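As background on how such indirect estimates are commonly derived (a generic textbook relation, not necessarily the exact procedure of this study), photographic or point-quadrat gap fractions are often inverted through a Beer-Lambert type model, P(θ) = exp(−G(θ)·L/cos θ), where P is the gap fraction at view zenith angle θ, L the leaf area index and G the projection coefficient (0.5 for a spherical leaf angle distribution).

```python
# Generic gap-fraction inversion (textbook relation, not necessarily this study's
# exact procedure): P(theta) = exp(-G * L / cos(theta)) => L = -cos(theta)*ln(P)/G
import math

def lai_from_gap_fraction(gap_fraction: float, view_zenith_deg: float = 0.0,
                          G: float = 0.5) -> float:
    """Invert a measured canopy gap fraction to leaf area index L."""
    theta = math.radians(view_zenith_deg)
    return -math.cos(theta) * math.log(gap_fraction) / G

# e.g. 40% canopy gaps seen straight up, spherical leaf-angle distribution assumed
print(lai_from_gap_fraction(0.4))  # ~1.8
```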