964 results for "Methods time measurement"
Abstract:
This study reviewed the subjective, clinical and radiological outcome of 71 patients (84 feet) treated by scarf osteotomy for hallux valgus deformity at our institution from 1995 to 1998, with an average follow-up of 22 months (range, 17 to 48 months). At the time of follow-up, 39% of the patients were very satisfied, 50% were satisfied and 11% were not satisfied. The mean AOFAS score rose significantly from 43 points (14 to 68) preoperatively to 82 points (39 to 100) at follow-up (p < 0.001). The radiological angles, including M1-M2, M1-P1, M1-M5 and DMAA, improved significantly (p < 0.001). Among the 16 complications recorded, seven (8%) were minor and nine (11%) required an additional procedure. The scarf osteotomy of the first metatarsal, coupled with a lateral soft-tissue release and, in three-quarters of our cases, with a basal closing wedge varisation osteotomy of the first phalanx, resulted in an overall high satisfaction rate as well as significant clinical and radiological improvement in our series. Nevertheless, the range of motion of the first MP joint remained low: 30 to 74 degrees in 52 patients (62%) and less than 30 degrees in four patients (5%). Furthermore, the mobility of the first ray and the consequences of the procedure in the sagittal plane need to be assessed more accurately, which may be achieved by incorporating measurement of plantar pressures in the forefoot into the global rating system.
Abstract:
Self-measurement of blood pressure (SMBP) is increasingly used to assess blood pressure outside the medical setting. A prerequisite for the wide use of SMBP is the availability of validated devices providing reliable readings when they are handled by patients. This is the case today with a number of fully automated oscillometric apparatuses. A major advantage of SMBP is the great number of readings, which is linked with high reproducibility. Given these advantages, one of the major indications for SMBP is the need for evaluation of antihypertensive treatment, either for individual patients in everyday practice or in clinical trials intended to characterize the effects of blood-pressure-lowering medications. In fact, SMBP is particularly helpful for evaluating resistant hypertension and detecting white-coat effect in patients exhibiting high office blood pressure under antihypertensive therapy. SMBP might also motivate the patient and improve his or her adherence to long-term treatment. Moreover, SMBP can be used as a sensitive technique for evaluating the effect of antihypertensive drugs in clinical trials; it increases the power of comparative trials, allowing one to study fewer patients or to detect smaller differences in blood pressure than would be possible with the office measurement. Therefore, SMBP can be regarded as a valuable technique for the follow-up of treated patients as well as for the assessment of antihypertensive drugs in clinical trials.
Abstract:
This thesis presents methods for measuring fatigue loading, post-processing the measured data and performing fatigue design. The methods were applied to a forestry machine loader, a welded structure subjected to fatigue loading. The theoretical part describes the fatigue phenomenon and fatigue design methods, as well as methods for load identification and for the post-processing of measurements. Alongside the most commonly used fatigue design methods, a reliability-based fatigue design approach is presented. In loader design, taking fatigue into account is particularly important because of the requirements for low weight and long service life. These structures characteristically contain certain welded details that are necessary for their function and that often determine the service life of the entire structure. Since these problem areas can usually be identified already at the design stage, shaping such details can often improve the service life of the whole structure considerably. Optimising these details is partly possible without load-spectrum data, but in most cases load identification is a prerequisite for finding the best solution. At present, long-term field measurements are the best means of identifying the actual fatigue loading. In field measurements, the loads acting on the structure are determined with strain gauges. Load identification is especially important when the service life of the structure is to be determined. Fatigue and fatigue loading are, however, statistical variables, and an exact service life cannot be determined for an individual structure. Using statistical methods it is nevertheless possible to determine the risk of failure of a structure. When the risk of failure is calculated for a large number of individual structures, quite accurate predictions of the number of possible failures can be made. Load-spectrum data can then be useful beyond conventional design, for example in warranty handling. In this work the presented theories were applied in practice to the fatigue analysis of the boom assembly of a forestry harvester. The loads on this structure were measured for a total of 35 hours over a two-week period, and from these measurements the statistical distribution of the fatigue loading was calculated for the example case. However, no conclusions could be drawn from the measurements about the loads over the product's whole life cycle or about the loads on other similar products, because the measured sample was relatively short and limited to a single operator and a few work sites. For testing the methods, the sample was nevertheless sufficient. The load-spectrum data was also used in formulating quality specifications for the example case: a fracture-mechanics-based method was used to estimate the largest permissible size of possible casting defects in the harvester pillar casting. The need for reliability-based design procedures appears to be increasing, so the efficient use of long-term field measurements will be a central part of fatigue design in the near future. The methods could be made more effective by combining the load spectrum with known quantities that correlate with the loads, such as the diameter of the tree being processed. The actual product-specific statistical distributions of the loads could possibly be formed more efficiently if, for example, the dependence of the loads on forest type could first be determined.
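The post-processing described above (turning long-term strain-gauge records into a fatigue assessment) typically involves cycle counting and a cumulative damage estimate. The sketch below is a minimal illustration of that step, assuming a simplified ASTM-style rainflow count and a Palmgren-Miner sum with an invented Basquin S-N curve; the stress history and constants are hypothetical and are not taken from the thesis.

```python
import numpy as np

def turning_points(series):
    """Reduce a load history to its sequence of local extrema."""
    tp = [series[0]]
    for x in series[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x                     # still moving in the same direction
        elif x != tp[-1]:
            tp.append(x)                   # direction reversed: new extremum
    return tp

def rainflow(series):
    """Simplified ASTM E1049 rainflow counting: returns (range, count) pairs."""
    stack, cycles = [], []
    for point in turning_points(series):
        stack.append(point)
        while len(stack) >= 3:
            x_rng = abs(stack[-1] - stack[-2])
            y_rng = abs(stack[-2] - stack[-3])
            if x_rng < y_rng:
                break
            if len(stack) == 3:            # range Y contains the starting point
                cycles.append((y_rng, 0.5))
                stack.pop(0)
            else:                          # interior range Y closes a full cycle
                cycles.append((y_rng, 1.0))
                last = stack.pop()
                stack.pop(); stack.pop()
                stack.append(last)
    for a, b in zip(stack[:-1], stack[1:]):
        cycles.append((abs(b - a), 0.5))   # leftover ranges count as half cycles
    return cycles

# Hypothetical measured stress history (MPa) and assumed S-N curve N(S) = C / S^m
rng = np.random.default_rng(0)
stress = 60 * rng.standard_normal(2000) + 20 * np.sin(np.linspace(0, 60, 2000))
C, m = 2.0e12, 3.0                         # invented weld-detail S-N constants

# Palmgren-Miner damage sum: D = sum(n_i / N(S_i)) = sum(n_i * S_i^m / C)
damage = sum(count * s_rng ** m / C for s_rng, count in rainflow(stress))
print(f"Palmgren-Miner damage sum for the measured sample: {damage:.3e}")
```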
Dynamic single cell measurements of kinase activity by synthetic kinase activity relocation sensors.
Abstract:
BACKGROUND: Mitogen-activated protein kinases (MAPK) play an essential role in integrating extracellular signals and intracellular cues to allow cells to grow, adapt to stresses, or undergo apoptosis. Budding yeast serves as a powerful system to understand the fundamental regulatory mechanisms that allow these pathways to combine multiple signals and deliver an appropriate response. To fully comprehend the variability and dynamics of these signaling cascades, dynamic and quantitative single-cell measurements are required. Microscopy is an ideal technique to obtain these data; however, novel assays have to be developed to measure the activity of these cascades. RESULTS: We have generated fluorescent biosensors that allow the real-time measurement of kinase activity at the single-cell level. Here, synthetic MAPK substrates were engineered to undergo nuclear-to-cytoplasmic relocation upon phosphorylation of a nuclear localization sequence. The combination of fluorescence microscopy and automated image analysis allows the quantification of the dynamics of kinase activity in hundreds of single cells. A large heterogeneity in the dynamics of MAPK activity between individual cells was measured. The variability in the mating pathway can be accounted for by differences in cell cycle stage, while, in the cell wall integrity pathway, the response to cell wall stress is independent of cell cycle stage. CONCLUSIONS: These synthetic kinase activity relocation sensors allow the quantification of kinase activity in live single cells. The modularity of their architecture will allow their application in many other signaling cascades. These measurements will make it possible to uncover dynamic behaviour that could not previously be observed in population-level measurements.
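The single-cell readout described above reduces, in practice, to measuring how fluorescence redistributes between nucleus and cytoplasm in each segmented cell. The function below is a minimal sketch of that quantification step, assuming a fluorescence image plus matching cell and nuclear label masks as inputs; the array names and the segmentation itself are hypothetical, and this is not the authors' analysis pipeline.

```python
import numpy as np

def nuc_cyt_ratio(fluor, cell_labels, nuc_labels):
    """Per-cell nuclear-to-cytoplasmic mean-intensity ratio.

    fluor       : 2-D array of fluorescence intensities
    cell_labels : 2-D int array, 0 = background, k = pixels of cell k
    nuc_labels  : 2-D int array, 0 = background, k = nuclear pixels of cell k
    """
    ratios = {}
    for k in np.unique(cell_labels):
        if k == 0:
            continue
        nuc = (nuc_labels == k)
        cyt = (cell_labels == k) & ~nuc          # cytoplasm = cell minus nucleus
        if nuc.any() and cyt.any():
            ratios[k] = fluor[nuc].mean() / fluor[cyt].mean()
    return ratios

if __name__ == "__main__":
    # Tiny synthetic check: one 'cell' whose nucleus is twice as bright as its cytoplasm
    fluor = np.array([[2.0, 2.0, 1.0, 1.0],
                      [2.0, 2.0, 1.0, 1.0]])
    cells = np.ones_like(fluor, dtype=int)
    nucs = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0]])
    print(nuc_cyt_ratio(fluor, cells, nucs))     # {1: 2.0}
```

Tracking this ratio over time in every segmented cell gives the dynamic, per-cell kinase activity trace the abstract refers to.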
Abstract:
Coronary artery disease is an atherosclerotic disease which leads to narrowing of the coronary arteries, deteriorated myocardial blood flow and myocardial ischaemia. In acute myocardial infarction, a prolonged period of myocardial ischaemia leads to myocardial necrosis, and the necrotic myocardium is replaced with scar tissue. Myocardial infarction results in various changes in cardiac structure and function over time that lead to "adverse remodelling". This remodelling may result in a progressive worsening of cardiac function and the development of chronic heart failure. In this thesis, we developed and validated three different large animal models of coronary artery disease, myocardial ischaemia and infarction for translational studies. In the first study, the coronary artery disease model combined induced diabetes and hypercholesterolemia. In the second study, myocardial ischaemia and infarction were produced by a surgical method, and in the third study by catheterisation. For model characterisation, we used non-invasive positron emission tomography (PET) methods to measure myocardial perfusion, oxidative metabolism and glucose utilisation. Additionally, cardiac function was measured by echocardiography and computed tomography. To study the metabolic changes that occur during atherosclerosis, the hypercholesterolemic and diabetic model was used with [18F]fluorodeoxyglucose ([18F]FDG) PET imaging. The coronary occlusion models were used to evaluate metabolic and structural changes in the heart and the cardioprotective effects of levosimendan during post-infarction cardiac remodelling. The large animal models were also used to test novel radiopharmaceuticals for myocardial perfusion imaging. In the coronary artery disease model, we observed atherosclerotic lesions that were associated with focally increased [18F]FDG uptake. In the heart failure models, chronic myocardial infarction led to worsening of systolic function, cardiac remodelling and decreased efficiency of cardiac pumping function. Levosimendan therapy reduced infarct size and improved cardiac function after infarction. The novel 68Ga-labelled radiopharmaceuticals tested in this study were not successful for the determination of myocardial blood flow. In conclusion, diabetes and hypercholesterolemia lead to the development of early-phase atherosclerotic lesions. Coronary artery occlusion produced considerable myocardial ischaemia and, later, infarction followed by myocardial remodelling. The experimental models evaluated in these studies will enable further studies of disease mechanisms, new radiopharmaceuticals and interventions in coronary artery disease and heart failure.
Abstract:
A membrane is a thin film with small, nanoscale pores that separate particles and dissolved compounds from a solution. The use of membrane filtration in water treatment has grown significantly owing to the increasing need for clean water and tightening environmental requirements. This work presents real-time measurement methods for monitoring membrane fouling. The methods presented include direct observation through the membrane, laser triangulometry, shadow analysis, refractive index measurement, the photo-interrupt method, particle velocimetry, radioisotope labelling and nuclear magnetic resonance spectrometry. These methods make it possible to follow the thickness of the fouling layer and its spreading in real time. The applicability of the measurement methods to existing processes is still uncertain. Most of the methods are limited to a specific membrane material, a particular membrane filtration configuration or particular operating conditions. In addition to the prevailing process conditions, the measurement sensor should also withstand the cleaning conditions. Further research is needed to find a working instrumentation setup capable of producing the required information.
Abstract:
This master's thesis examined spectrometric online measurement methods for determining the chemical and physical properties of waste. The aim was to establish which properties can be measured with these methods and how reliable the results are. A literature review was carried out on the suitability of three spectrometric methods for real-time waste measurements. In the empirical part of the work, the elemental concentrations of four different waste samples were measured with an FPXRF analyser in order to determine which elements the method can measure. The FPXRF results were compared with results obtained by ICP-MS using regression analysis. The FPXRF analyser was found to be best suited for determining potassium, calcium and iron concentrations. The determination of lead, zinc, chromium, chlorine, copper, cadmium, arsenic, phosphorus, molybdenum and vanadium is also possible, but laboratory methods may be needed to obtain exact concentrations. Of the waste samples studied, the method was best suited to ash and compost because of their physical properties, such as homogeneity and moisture content; it was poorly suited to biowaste. The reliability of the FPXRF analyser is affected by the moisture content, homogeneity and particle size of the sample, by the measurement procedure and by the calibration of the instrument. The methods examined in this work cannot at present fully replace laboratory analyses. The FPXRF analyser can, however, be used for qualitative or semi-quantitative analysis of harmful substances, which can reduce the need for expensive laboratory analyses.
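The comparison against ICP-MS described above comes down to a per-element linear regression between the field analyser and the laboratory reference. The sketch below shows that calculation with ordinary least squares; the paired concentration values are invented placeholders, not data from the thesis.

```python
import numpy as np

# Hypothetical paired concentrations (mg/kg) for one element, e.g. zinc:
# ICP-MS laboratory reference vs. field-portable XRF readings of the same samples.
icpms = np.array([120.0, 340.0, 560.0, 890.0, 1500.0])
fpxrf = np.array([105.0, 310.0, 600.0, 860.0, 1420.0])

# Ordinary least-squares fit: fpxrf = a * icpms + b
a, b = np.polyfit(icpms, fpxrf, 1)

# Coefficient of determination R^2 of the fit
pred = a * icpms + b
ss_res = np.sum((fpxrf - pred) ** 2)
ss_tot = np.sum((fpxrf - fpxrf.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"slope={a:.3f}, intercept={b:.1f} mg/kg, R^2={r2:.3f}")
```

A slope near one, an intercept near zero and a high R² would indicate that the field analyser tracks the laboratory method for that element.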
Abstract:
Sonar signal processing comprises a large number of signal processing algorithms for implementing functions such as target detection, localisation, classification, tracking and parameter estimation. Current implementations of these functions rely on conventional techniques largely based on Fourier methods, which are primarily meant for stationary signals. The signals received by sonar sensors, however, are often non-stationary, and hence processing methods capable of handling this non-stationarity will fare better than Fourier-transform-based methods. Time-frequency methods (TFMs) are among the best DSP tools for non-stationary signal processing, allowing signals to be analysed in the time and frequency domains simultaneously. However, apart from the STFT, TFMs have been largely limited to academic research because of the complexity of the algorithms and the limitations of computing power. With the availability of fast processors, many applications of TFMs have been reported in speech and image processing and in biomedical applications, but few in sonar processing. A structured effort to fill this gap by exploring the potential of TFMs in sonar applications is the net outcome of this thesis. To this end, four TFMs have been explored in detail, namely the Wavelet Transform, the Fractional Fourier Transform, the Wigner-Ville Distribution and the Ambiguity Function, and their potential in implementing five major sonar functions has been demonstrated with very promising results. What has been conclusively brought out in this thesis is that there is no "one best TFM" for all applications, but there is "one best TFM" for each application. Accordingly, the TFM has to be adapted and tailored in many ways in order to develop specific algorithms for each of the applications.
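As a small illustration of the time-frequency analysis advocated above, the sketch below computes a basic discrete Wigner-Ville distribution of a linear chirp, the kind of non-stationary signal whose frequency content a plain Fourier spectrum cannot localise in time. The chirp parameters and the unsmoothed textbook form of the distribution are assumptions of this sketch, not the algorithms developed in the thesis.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x.

    Returns an (N x N) array: rows = time index n, columns = frequency bins
    at k * fs / (2 * N). Basic unsmoothed form, so cross-terms are present.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)            # largest symmetric lag inside the record
        taus = np.arange(-tau_max, tau_max + 1)
        r = x[n + taus] * np.conj(x[n - taus]) # instantaneous autocorrelation r[tau]
        kernel = np.zeros(N, dtype=complex)
        kernel[taus % N] = r                   # wrap negative lags for the FFT
        W[n] = np.real(np.fft.fft(kernel))     # Fourier transform over the lag axis
    return W

# Complex linear chirp sweeping from 50 Hz upward at 400 Hz/s
fs, T = 1000.0, 0.5
t = np.arange(0.0, T, 1.0 / fs)
x = np.exp(1j * 2 * np.pi * (50 * t + 200 * t ** 2))
W = wigner_ville(x)

# The WVD ridge should track the instantaneous frequency (150 Hz at mid-record)
n_mid = len(t) // 2
k_peak = np.argmax(W[n_mid, : len(t) // 2])
print(f"instantaneous frequency at t={t[n_mid]:.3f} s ≈ {k_peak * fs / (2 * len(t)):.0f} Hz")
```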
Abstract:
The Bureau International des Poids et Mesures, the BIPM, was established by Article 1 of the Convention du Mètre, on 20 May 1875, and is charged with providing the basis for a single, coherent system of measurements to be used throughout the world. The decimal metric system, dating from the time of the French Revolution, was based on the metre and the kilogram. Under the terms of the 1875 Convention, new international prototypes of the metre and kilogram were made and formally adopted by the first Conférence Générale des Poids et Mesures (CGPM) in 1889. Over time this system developed, so that it now includes seven base units. In 1960 it was decided at the 11th CGPM that it should be called the Système International d’Unités, the SI (in English: the International System of Units). The SI is not static but evolves to match the world’s increasingly demanding requirements for measurements at all levels of precision and in all areas of science, technology, and human endeavour. This document is a summary of the SI Brochure, a publication of the BIPM which is a statement of the current status of the SI. The seven base units of the SI, listed in Table 1, provide the reference used to define all the measurement units of the International System. As science advances, and methods of measurement are refined, their definitions have to be revised. The more accurate the measurements, the greater the care required in the realization of the units of measurement.
Abstract:
The purpose of this study was to apply and compare two time-domain analysis procedures in the determination of oxygen uptake (VO2) kinetics in response to a pseudorandom binary sequence (PRBS) exercise test. PRBS exercise tests have typically been analysed in the frequency domain. However, the complex interpretation of frequency responses may have limited the application of this procedure in both sporting and clinical contexts, where a single time measurement would facilitate subject comparison. The relative potential of both a mean response time (MRT) and a peak cross-correlation time (PCCT) was investigated. This study was divided into two parts: a test-retest reliability study (part A), in which 10 healthy male subjects completed two identical PRBS exercise tests, and a comparison of the VO2 kinetics of 12 elite endurance runners (ER) and 12 elite sprinters (SR; part B). In part A, 95% limits of agreement were calculated for comparison between MRT and PCCT. The results of part A showed no significant difference between test and retest as assessed by MRT [mean (SD) 42.2 (4.2) s and 43.8 (6.9) s] or by PCCT [21.8 (3.7) s and 22.7 (4.5) s]. Measurement error (%) was lower for MRT than for PCCT (16% and 25%, respectively). In part B of the study, the VO2 kinetics of ER were significantly faster than those of SR, as assessed by MRT [33.4 (3.4) s and 39.9 (7.1) s, respectively; P < 0.01] and PCCT [20.9 (3.8) s and 24.8 (4.5) s; P < 0.05]. It is possible that either analysis procedure could provide a single test measurement of VO2 kinetics; however, the greater reliability of the MRT data suggests that this method has more potential for development in the assessment of VO2 kinetics by PRBS exercise testing.
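Both indices compared above condense the PRBS response into a single time value. The sketch below illustrates one plausible way to obtain them from a simulated test: the peak cross-correlation time is the lag that maximises the work-rate/VO2 cross-correlation, and a mean response time is taken as the normalised first moment of that cross-correlation, which for a PRBS input approximates the impulse response. The simulated first-order response, the noise level and these exact definitions are assumptions for illustration, not the study's published procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated PRBS exercise test (illustrative, not the study's protocol) ---
dt = 1.0                                                 # 1-s resampled VO2 data
work = np.repeat(rng.choice([0.0, 1.0], size=30), 30)    # 30-s PRBS units, 900-s test

tau = 35.0                                               # assumed first-order VO2 time constant, s
vo2 = np.zeros_like(work)
for i in range(1, len(work)):
    vo2[i] = vo2[i - 1] + dt / tau * (work[i - 1] - vo2[i - 1])
vo2 += rng.normal(0.0, 0.02, size=len(vo2))              # breath-by-breath noise

# --- Cross-correlate work rate and VO2 over positive lags --------------------
w = work - work.mean()
v = vo2 - vo2.mean()
lags = np.arange(0, 120)
ccf = np.array([np.sum(w[: len(w) - L] * v[L:]) for L in lags])

pcct = lags[np.argmax(ccf)] * dt                         # peak cross-correlation time, s

# For a PRBS input the cross-correlation approximates the impulse response,
# so a mean response time can be taken as its normalised first moment.
h = np.clip(ccf, 0.0, None)
mrt = np.sum(lags * dt * h) / np.sum(h)

print(f"PCCT ≈ {pcct:.0f} s, MRT ≈ {mrt:.0f} s")
```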
Abstract:
In this paper, we propose a new method of measuring very slow paramagnetic ion diffusion coefficients using a commercial high-resolution spectrometer. If distinct paramagnetic ions influence the hydrogen nuclear magnetic relaxation time differently, their diffusion coefficients can be measured separately. A cylindrical phantom filled with Fricke xylenol gel solution and irradiated with gamma rays was used to validate the method. The Fricke xylenol gel solution was prepared with 270 Bloom porcine gelatin, the phantom was irradiated with gamma rays from a ⁶⁰Co source, and a high-resolution 200 MHz nuclear magnetic resonance (NMR) spectrometer was used to obtain the phantom ¹H profile in the presence of a linear magnetic field gradient. By observing the temporal evolution of the phantom NMR profile, an apparent diffusion coefficient of 0.50 μm²/ms was obtained for the ferric ions. In any medical procedure in which ionizing radiation is used, dose planning and dose delivery are key elements for patient safety and the success of treatment. These points become even more important in modern conformal radiotherapy techniques, such as stereotactic radiosurgery, where the dose delivered in a single session of treatment can be an order of magnitude higher than the regular doses of radiotherapy. Several methods have been proposed to obtain the three-dimensional (3-D) dose distribution. Recently, we proposed an alternative method for 3-D radiation dose mapping, in which the ionizing radiation modifies the local relative concentration of Fe²⁺/Fe³⁺ in a phantom containing Fricke gel and this variation is associated with the MR image intensity. The smearing of the intensity gradient is proportional to the diffusion coefficient of the Fe³⁺ and Fe²⁺ ions in the phantom. Several NMR methods exist for measuring ionic diffusion; however, they are applicable only when the diffusion is not very slow.
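The diffusion estimate described above relies on the fact that an initially sharp boundary in the ¹H profile smears out with time; for one-dimensional diffusion across a step, the profile follows an error function whose width grows as sigma = sqrt(2 D t), so fitting the width at successive times yields the apparent diffusion coefficient. The sketch below works through that fit on synthetic profiles; the geometry, noise and fitting choices are illustrative assumptions, not the authors' exact analysis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def smeared_step(x, x0, sigma, a, b):
    """Step edge at x0 smeared by diffusion: amplitude a, baseline b."""
    return b + a * 0.5 * (1 + erf((x - x0) / (sigma * np.sqrt(2))))

# --- Synthetic 1-D profiles measured at several times after the edge is created ---
rng = np.random.default_rng(1)
D_true = 0.5e-3                                   # mm^2/s (equivalent to 0.50 um^2/ms)
x = np.linspace(-10, 10, 400)                     # position along the phantom, mm
times = np.array([600.0, 1800.0, 3600.0, 7200.0]) # seconds

sigmas = []
for t in times:
    sigma = np.sqrt(2 * D_true * t)
    profile = smeared_step(x, 0.0, sigma, 1.0, 0.1)
    profile += rng.normal(0.0, 0.01, size=x.size)
    # Fit the smeared-step model to recover sigma at this time
    popt, _ = curve_fit(smeared_step, x, profile, p0=(0.0, 1.0, 1.0, 0.0))
    sigmas.append(abs(popt[1]))

# sigma^2 = 2 D t, so the slope of sigma^2 versus t gives 2 D
slope = np.polyfit(times, np.square(sigmas), 1)[0]
print(f"apparent D ≈ {slope / 2:.2e} mm^2/s")
```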
Abstract:
We discuss the development and performance of a low-power sensor node (hardware, software and algorithms) that autonomously controls the sampling interval of a suite of sensors based on local state estimates and future predictions of water flow. The problem is motivated by the need to accurately reconstruct abrupt state changes in urban watersheds and stormwater systems. Presently, the detection of these events is limited by the temporal resolution of sensor data. It is often infeasible, however, to increase measurement frequency due to energy and sampling constraints. This is particularly true for real-time water quality measurements, where sampling frequency is limited by reagent availability, sensor power consumption, and, in the case of automated samplers, the number of available sample containers. These constraints pose a significant barrier to the ubiquitous and cost effective instrumentation of large hydraulic and hydrologic systems. Each of our sensor nodes is equipped with a low-power microcontroller and a wireless module to take advantage of urban cellular coverage. The node persistently updates a local, embedded model of flow conditions while IP-connectivity permits each node to continually query public weather servers for hourly precipitation forecasts. The sampling frequency is then adjusted to increase the likelihood of capturing abrupt changes in a sensor signal, such as the rise in the hydrograph – an event that is often difficult to capture through traditional sampling techniques. Our architecture forms an embedded processing chain, leveraging local computational resources to assess uncertainty by analyzing data as it is collected. A network is presently being deployed in an urban watershed in Michigan and initial results indicate that the system accurately reconstructs signals of interest while significantly reducing energy consumption and the use of sampling resources. We also expand our analysis by discussing the role of this approach for the efficient real-time measurement of stormwater systems.
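The scheduling idea described above (densify sampling when the local state estimate or the precipitation forecast suggests an imminent event, back off otherwise) can be summarised in a few lines. The sketch below is a deliberately simplified, hypothetical version of such a scheduler; the thresholds, interval bounds and placeholder sensor/forecast functions are assumptions, not the deployed node's firmware.

```python
import random  # stands in for real sensor readings and forecast queries

# Hypothetical bounds and thresholds; a deployed node would tune these.
MIN_INTERVAL_S = 60        # fastest allowed sampling, e.g. during a storm rise
MAX_INTERVAL_S = 3600      # slowest sampling during dry weather
RAIN_THRESHOLD_MM = 1.0    # forecast rain that warrants faster sampling
RISE_THRESHOLD_M = 0.05    # water-level rise between samples treated as an event

def forecast_rain_mm() -> float:
    """Placeholder for querying a public weather server over the cellular link."""
    return random.uniform(0.0, 5.0)

def read_level_m() -> float:
    """Placeholder for the water-level sensor."""
    return random.uniform(0.0, 2.0)

def next_interval(prev_level: float, level: float) -> int:
    """Choose the next sampling interval from the local change and the forecast."""
    rising_fast = (level - prev_level) > RISE_THRESHOLD_M
    rain_expected = forecast_rain_mm() > RAIN_THRESHOLD_MM
    if rising_fast or rain_expected:
        return MIN_INTERVAL_S          # densify sampling around likely events
    return MAX_INTERVAL_S              # otherwise conserve energy and reagents

# One step of the (otherwise endless) sampling loop:
prev, cur = read_level_m(), read_level_m()
print("sleep for", next_interval(prev, cur), "s before the next sample")
```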
Abstract:
In the tropical Atlantic Forest, the areas of 42 canopy gaps were estimated using four different field measurement methods: those of Runkle, Brokaw and Green [Runkle, J.R., 1981. Gap formation in some old-growth forests of the eastern United States. Ecology 62, 1041-1051; Brokaw, N.V.L., 1982. The definition of treefall gap and its effect on measures of forest dynamics. Biotropica 14, 158-160; Green, P.T., 1996. Canopy gaps in rain forest on Christmas Island, Indian Ocean: size distribution and methods of measurement. J. Trop. Ecol. 12, 427-434] and a new method proposed in this work. Within the same gap delimitation, average gap size varied from 56.0 to 88.3 m², while the total sum of gap area varied from 2351.3 to 3707.9 m². Differences among all methods and between pairs of methods proved to be statistically significant. As a consequence, gap size-class distributions also differed between methods. When one method is held as a standard, deviations in average gap size range between 11.8 and 59.7%, while deviations for a single gap can reach 172.8%. The implications for forest dynamics were expressed by the forest turnover rate, which was 24% faster or 15% slower depending on the method adopted for gap measurement. Based on these results and on the evaluation of the methods, the new method is proposed for future research involving the measurement of gap size in forest ecosystems. Finally, it is concluded that forest comparisons disregarding the influence of different methods of gap measurement should be reconsidered.
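Because the four methods above yield different total gap areas, any quantity derived from them shifts as well; the forest turnover time is a direct example. The short calculation below illustrates that sensitivity using the common approximation that turnover time equals plot area times mean gap age divided by current gap area; only the two total gap areas come from the abstract, while the plot area and mean gap age are hypothetical.

```python
# Total gap areas from the two extreme field methods (m^2), as reported above
area_low, area_high = 2351.3, 3707.9

plot_area_m2 = 10_000.0      # hypothetical 1-ha study plot
mean_gap_age = 10.0          # hypothetical mean gap age, years

# Turnover time ≈ plot area / (gap area created per year)
#              ≈ plot area * mean gap age / current gap area
for area in (area_low, area_high):
    turnover_years = plot_area_m2 * mean_gap_age / area
    print(f"total gap area {area:7.1f} m^2  ->  turnover time ≈ {turnover_years:.0f} years")
```

The two estimates differ by the same ~58% factor as the gap areas themselves, which is why the choice of measurement method propagates directly into turnover comparisons.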
Abstract:
This paper proposes the use of a state-space technique to represent a frequency-dependent transmission line for simulating electromagnetic transients directly in the time domain. The distributed nature of the line is represented by a network of multiple π sections made up of lumped parameters, and the frequency dependence of the per-unit-length longitudinal parameters is matched by a rational function. The rational function is represented by its equivalent circuit with passive elements, and this passive circuit is then inserted in each π circuit of the cascade that represents the line. Because the resulting system matrix is very sparse, a sparsity technique can be used to store only its nonzero elements, saving memory and running time. The model was used to simulate the energization of a 10 km single-phase line.
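The cascade structure described above can be written down directly as a state-space system. The sketch below builds only the frequency-independent skeleton of such a cascade (a series R-L branch and a shunt capacitor per π section) and energizes an open-ended line with a voltage step; the parameter values, the number of sections and the omission of the fitted rational branches are simplifying assumptions, not the model of the paper.

```python
import numpy as np
from scipy.signal import StateSpace, lsim

# Hypothetical per-section constants for a short single-phase line
N = 20           # number of pi sections in the cascade
R = 0.05         # series resistance per section, ohm
L = 5e-4         # series inductance per section, H
C = 5e-8         # shunt capacitance per section, F (lumped at the section's end node)

n = 2 * N                              # states: [i_1, v_1, i_2, v_2, ..., i_N, v_N]
A = np.zeros((n, n))
B = np.zeros((n, 1))
for k in range(N):
    ii, vi = 2 * k, 2 * k + 1          # indices of i_k and v_k
    # L di_k/dt = v_{k-1} - R i_k - v_k   (v_0 is the source voltage u)
    A[ii, ii] = -R / L
    A[ii, vi] = -1.0 / L
    if k == 0:
        B[ii, 0] = 1.0 / L
    else:
        A[ii, vi - 2] = 1.0 / L
    # C dv_k/dt = i_k - i_{k+1}          (no outgoing current at the open far end)
    A[vi, ii] = 1.0 / C
    if k < N - 1:
        A[vi, ii + 2] = -1.0 / C

Cmat = np.zeros((1, n)); Cmat[0, -1] = 1.0   # output: far-end voltage v_N
line = StateSpace(A, B, Cmat, np.zeros((1, 1)))

# Energize the open-ended line with a 1-V step and inspect the far-end voltage
t = np.linspace(0.0, 5e-3, 5000)
u = np.ones_like(t)
_, y, _ = lsim(line, u, t)
print(f"peak far-end voltage ≈ {y.max():.2f} V (travelling-wave overvoltage)")
```

Inserting the passive equivalent of the fitted rational function into each section would add extra states per section but leave this overall ladder structure, and hence the sparsity of A, unchanged.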
Abstract:
Postgraduate programme in Agronomy (Energy in Agriculture) - FCA