942 results for Current Density Mapping Method
Abstract:
Spatial data analysis, mapping and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations; methods of geostatistics, such as the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996), etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, often of unknown nature. That is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as a realization of some spatial random process. To obtain an estimate with kriging, one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if the number of measurements is insufficient, and the variogram is sensitive to outliers and extremes. ANNs are a powerful tool, but they too suffer from a number of drawbacks. ANNs of a special type (multilayer perceptrons) are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear, robust to noise in the measurements, able to deal with small empirical datasets, and grounded in a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression. SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR for spatial mapping of physical quantities were obtained by the authors for mapping of medium porosity (Kanevski et al., 1999) and for mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for nonlinear modeling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping on a real case study: soil pollution by the 137Cs radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
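As a rough illustration of the approach described above, the sketch below fits an ε-SVR with an RBF kernel to noisy 2D measurements and predicts on a regular grid. scikit-learn stands in for the authors' own implementation, and every value (coordinates, kernel width, C, ε) is an illustrative assumption, not a parameter from the paper.

```python
# Minimal epsilon-SVR sketch for 2D spatial mapping; all values illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(200, 2))           # (x, y) measurement locations
z = np.sin(X[:, 0] / 15) + 0.5 * np.cos(X[:, 1] / 20)
z += rng.normal(scale=0.1, size=len(z))          # noisy measurements

# C, epsilon and the RBF width (gamma) control the robustness/smoothness
# trade-off discussed in the text.
model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=0.5))
model.fit(X, z)

# Predict on a regular grid to produce the map.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_map = model.predict(grid).reshape(gx.shape)
```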
Abstract:
Velocity-density tests conducted in the laboratory involved small compacted soil cylinders, 4 inches in diameter by 4.58 inches long, made up of three different soil types and covering varying degrees of density and moisture content, the latter being varied well beyond optimum moisture values. Seventeen specimens were tested: 9 with velocity determinations made along two elements of the cylinder, 180 degrees apart, and 8 along three elements, 120 degrees apart. Seismic energy was developed by blows of a small tack hammer on a 5/8-inch diameter steel ball placed at the center of the top of the cylinder, with the detector placed successively at four points spaced 1/2 inch apart on the side of the specimen, giving wave travel paths varying from 3.36 inches to 4.66 inches in length. Time intervals were measured using a model 217 micro-seismic timer in both laboratory and field measurements. Forty blows of the hammer were required for each velocity determination, which amounted to 80 blows on 9 laboratory specimens and 120 blows on the remaining 8 cylinders. Thirty-five field tests were made over the three selected soil types, all fine-grained, using a 2-foot seismic line with hammer-impact points at 6-inch intervals. The small tack hammer and 5/8-inch steel ball were again used to develop seismic wave energy. Generally, the densities obtained from the velocity measurements were lower than those measured in the conventional field testing. The conclusions reached were that: (1) the method does not appear to be usable for measuring the density of essentially fine-grained soils when the moisture content greatly exceeds the optimum for compaction, and (2) because of a gradual reduction in velocity upon aging, apparently caused by gradual absorption of pore water into the expandable interlayer region of the clay, the seismic test should be conducted immediately after soil compaction to obtain a meaningful velocity value.
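For readers unfamiliar with the underlying arithmetic, the sketch below converts a travel-path length and a timer reading into a wave velocity. The extreme path lengths are taken from the abstract; the intermediate lengths and all timer readings are made-up placeholders, not reported data.

```python
# Velocity = travel-path length / measured time interval. Path extremes from
# the abstract; intermediate lengths and timer readings are hypothetical.
path_lengths_in = [3.36, 3.79, 4.22, 4.66]   # four detector positions, inches
travel_times_us = [60.0, 68.0, 76.0, 84.0]   # assumed timer readings, microseconds

for d_in, t_us in zip(path_lengths_in, travel_times_us):
    v_fps = (d_in / 12.0) / (t_us * 1e-6)    # inches -> feet, microseconds -> seconds
    print(f"path {d_in:.2f} in, time {t_us:.1f} us -> {v_fps:,.0f} ft/s")
```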
Abstract:
Background: There may be a considerable gap between the LDL cholesterol (LDL-C) and blood pressure (BP) goal values recommended by guidelines and the results achieved in daily practice. Design: Prospective cross-sectional survey of cardiovascular disease risk profiles and their management, with a focus on lipid lowering and BP lowering in clinical practice. Methods: In phase 1, the cardiovascular risk of patients with a known lipid profile visiting their general practitioner was assessed anonymously in accordance with the PROCAM score. In phase 2, high-risk patients who did not achieve the LDL-C goal of less than 2.6 mmol/l in phase 1 could be further documented. Results: Six hundred thirty-five general practitioners collected data on 23 892 patients with a known lipid profile. Forty percent were high-risk patients (diabetes mellitus, coronary heart disease, or PROCAM score >20%), compared with the 27% estimated by the physicians. The goal attainment rate in high-risk patients was almost double for BP compared with LDL-C (62 vs. 37%). Both goals were attained by 25%. LDL-C values in phases 1 and 2 were available for 3097 high-risk patients not at the LDL-C goal in phase 1; 32% of these patients achieved the LDL-C goal of less than 2.6 mmol/l after a mean of 17 weeks. The most successful strategies for LDL-C reduction were implemented in only 22% of the high-risk patients. Conclusion: Although patients at high cardiovascular risk were treated more intensively than low- or medium-risk patients, the majority remained insufficiently controlled, which is an incentive for intensified medical education. Adequate implementation of Swiss and international guidelines would be expected to contribute to improved achievement of LDL-C and BP goal values in daily practice.
Abstract:
Three standard radiation qualities (RQA 3, RQA 5 and RQA 9) and two screens, Kodak Lanex Regular and Insight Skeletal, were used to compare the imaging performance and dose requirements of the new Kodak Hyper Speed G and the current Kodak T-MAT G/RA medical x-ray films. The noise equivalent quanta (NEQ) and detective quantum efficiencies (DQE) of the four screen-film combinations were measured at three gross optical densities and compared with the characteristics of the Kodak CR 9000 system with GP (general purpose) and HR (high resolution) phosphor plates. The new Hyper Speed G film has double the intrinsic sensitivity of the T-MAT G/RA film and a higher contrast in the high optical density range for comparable exposure latitude. By providing both high sensitivity and high spatial resolution, the new film significantly improves the compromise between dose and image quality. As expected, the new film has a higher noise level and a lower signal-to-noise ratio than the standard film, although in the high frequency range this is compensated for by a better resolution, giving better DQE results, especially at high optical density. Both screen-film systems outperform the phosphor plates in terms of MTF and DQE for standard imaging conditions (Regular screen at RQA 5 and RQA 9 beam qualities). At low energy (RQA 3), the CR system has a comparable low-frequency DQE to screen-film systems when used with a fine screen at low and middle optical densities, and a superior low-frequency DQE at high optical density.
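The quantities compared above are related by standard definitions: NEQ(f) = MTF(f)^2 / NNPS(f) and DQE(f) = NEQ(f) / q, where NNPS is the normalized noise power spectrum and q the incident photon fluence. The sketch below applies these definitions to made-up curves; it is not the authors' measurement pipeline, and the MTF, NNPS and fluence values are illustrative assumptions.

```python
# Standard NEQ/DQE definitions applied to illustrative curves.
import numpy as np

f = np.linspace(0.05, 5.0, 100)     # spatial frequency, cycles/mm
mtf = np.exp(-0.5 * f)              # illustrative MTF curve
nnps = 1e-5 * (1 + 0.2 * f)         # illustrative normalized NPS, mm^2
q = 2.5e5                           # illustrative fluence, photons/mm^2

neq = mtf**2 / nnps                 # noise equivalent quanta, 1/mm^2
dqe = neq / q                       # detective quantum efficiency (unitless)
print(f"DQE at lowest frequency: {dqe[0]:.2f}")
```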
Abstract:
Multi-center studies using magnetic resonance imaging facilitate studying small effect sizes, global population variance and rare diseases. The reliability and sensitivity of these multi-center studies crucially depend on the comparability of the data generated at different sites and time points. The level of inter-site comparability is still controversial for conventional anatomical T1-weighted MRI data. Quantitative multi-parameter mapping (MPM) was designed to provide MR parameter measures that are comparable across sites and time points, i.e., 1 mm high-resolution maps of the longitudinal relaxation rate (R1 = 1/T1), effective proton density (PD*), magnetization transfer saturation (MT) and effective transverse relaxation rate (R2* = 1/T2*). MPM was validated at 3T for use in multi-center studies by scanning five volunteers at three different sites. We determined the inter-site bias and the inter-site and intra-site coefficient of variation (CoV) for typical morphometric measures [i.e., gray matter (GM) probability maps used in voxel-based morphometry] and the four quantitative parameters. The inter-site bias and CoV were smaller than 3.1% and 8%, respectively, except for the inter-site CoV of R2* (<20%). The GM probability maps based on the MT parameter maps had a 14% higher inter-site reproducibility than maps based on conventional T1-weighted images. The low inter-site bias and variance in the parameters and derived GM probability maps confirm the high comparability of the quantitative maps across sites and time points. The reliability, short acquisition time, high resolution and detailed insights into brain microstructure provided by MPM make it an efficient tool for multi-center imaging studies.
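The reproducibility metrics quoted above can be computed roughly as in the sketch below, under our assumption (not a statement of the authors' exact definitions) that inter-site bias is the spread of per-site means relative to the grand mean and inter-site CoV is the per-subject across-site standard deviation relative to the grand mean; all numbers are synthetic.

```python
# Sketch of inter-site bias and CoV; array shapes and values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
# values[subject, site]: e.g. mean R1 in a region for 5 volunteers at 3 sites
values = 0.65 + rng.normal(scale=0.01, size=(5, 3))

site_means = values.mean(axis=0)
inter_site_bias = 100 * (site_means.max() - site_means.min()) / site_means.mean()
inter_site_cov = 100 * values.std(axis=1, ddof=1).mean() / values.mean()
print(f"bias {inter_site_bias:.2f}%  CoV {inter_site_cov:.2f}%")
```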
Abstract:
BACKGROUND: Carotid artery stenosis is associated with the occurrence of acute and chronic ischemic lesions that increase with age in the elderly population. Diffusion imaging with ADC mapping may be an appropriate method to investigate patients with chronic hypoperfusion secondary to carotid stenosis. This non-invasive technique allows investigation of brain integrity and structure, in particular of hypoperfusion induced by carotid stenosis. The aim of this study was to evaluate the impact of a carotid stenosis on the parenchyma using ADC mapping. METHODS: Fifty-nine patients with symptomatic (33) and asymptomatic (26) carotid stenosis were recruited from our multidisciplinary consultation. Both groups demonstrated a similar degree of stenosis. All patients underwent MRI of the brain, including diffusion-weighted MR imaging with ADC mapping. Regions of interest were defined in the anterior and posterior paraventricular regions, both ipsilateral and contralateral to the stenosis (anterior circulation). The same analysis was performed for the thalamic and occipital regions (posterior circulation). RESULTS: ADC values of the affected vascular territory were significantly higher on the side of the stenosis in the periventricular anterior (P<0.001) and posterior (P<0.01) areas. There was no difference between ipsilateral and contralateral ADC values in the thalamic and occipital regions. CONCLUSIONS: We have shown that carotid stenosis is associated with significantly higher ADC values in the anterior circulation, probably reflecting an impact of chronic hypoperfusion on the brain parenchyma in symptomatic and asymptomatic patients. This is consistent with previous data in the literature.
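ADC mapping rests on the standard mono-exponential diffusion model, S_b = S_0 exp(-b * ADC), so a two-point estimate is ADC = ln(S_0 / S_b) / b. The sketch below applies this formula to made-up ROI signal means; the b-value and signals are illustrative, not study data.

```python
# Two-point ADC estimate from the mono-exponential diffusion model.
import numpy as np

b = 1000.0                        # s/mm^2, a typical DWI b-value
S0 = np.array([980.0, 950.0])     # signal at b = 0 (illustrative ROI means)
Sb = np.array([420.0, 460.0])     # signal at b = 1000 s/mm^2

adc = np.log(S0 / Sb) / b         # mm^2/s
print(adc)                        # ~0.7-0.9e-3 mm^2/s, plausible for brain tissue
```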
Abstract:
The objective of this work was to evaluate the effect of sampling density on the accuracy of soil order prediction, at high spatial resolution, in a viticultural zone of Serra Gaúcha, Southern Brazil. A digital elevation model (DEM), a cartographic base, a conventional soil map, and the Idrisi software were used. Seven predictor variables were calculated and read, along with the soil classes, at randomly distributed points with sampling densities of 0.5, 1, 1.5, 2, and 4 points per hectare. The data were used to train a decision tree (Gini) and three artificial neural networks: adaptive resonance theory, fuzzy ARTMap; self-organizing map, SOM; and multi-layer perceptron, MLP. The estimated maps were compared with the conventional soil map to calculate omission and commission errors, overall accuracy, and quantity and allocation disagreement. The decision tree was the least sensitive to sampling density and had the highest accuracy and consistency. The SOM was the least sensitive and most consistent of the networks. The MLP had a critical minimum and showed high inconsistency, whereas fuzzy ARTMap was the most sensitive and least accurate. The results indicate that the sampling densities used in conventional soil surveys can serve as a reference for predicting soil orders in Serra Gaúcha.
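A minimal stand-in for the decision-tree (Gini) predictor is sketched below. scikit-learn replaces the Idrisi workflow used in the study, the seven features are placeholders for the terrain predictor variables, and the labels are synthetic; varying sampling density would correspond to subsampling the training points before fitting.

```python
# Gini decision tree on placeholder terrain predictors; data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 7))         # 7 predictor variables at sampled points
y = rng.integers(0, 4, size=500)      # soil order labels (illustrative)

clf = DecisionTreeClassifier(criterion="gini", random_state=0)
clf.fit(X, y)
print(clf.score(X, y))                # accuracy on the training points
```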
Abstract:
RATIONALE, AIMS AND OBJECTIVES: There is little evidence regarding the benefit of stress ulcer prophylaxis (SUP) outside a critical care setting. Overprescription of SUP is not devoid of risks. This prospective study aimed to evaluate the use of proton pump inhibitors (PPIs) for SUP in a general surgery department. METHOD: Data were collected prospectively by pharmacists during an 8-week period on patients hospitalized in a general surgery department (58 beds). Patients with a PPI prescription for the treatment of ulcers, gastro-oesophageal reflux disease, oesophagitis or epigastric pain were excluded. Patients admitted twice during the study period were not re-included. The American Society of Health-System Pharmacists guidelines on SUP were used to assess the appropriateness of de novo PPI prescriptions. RESULTS: Among the 255 patients in the study, 138 (54%) received prophylaxis with a PPI, of which 86 (62%) were de novo PPI prescriptions. A total of 129 patients (94%) received esomeprazole (in accordance with the hospital drug policy), the most frequent dosage being 40 mg once daily. The use of PPIs for SUP was evaluated in 67 patients. A total of 53 patients (79%) had no risk factors for SUP; twelve patients had one risk factor and two had two. At discharge, PPI prophylaxis was continued in 33% of patients with a de novo PPI prescription. CONCLUSIONS: This study highlights the overuse of PPIs in non-intensive care unit patients and the inappropriate continuation of PPI prescriptions at discharge. Treatment recommendations for SUP are needed to restrict PPI use to justified indications.
Abstract:
Context: In the milder form of primary hyperparathyroidism (PHPT), cancellous bone, represented by areal bone mineral density at the lumbar spine by dual-energy x-ray absorptiometry (DXA), is preserved. This finding is in contrast to high-resolution peripheral quantitative computed tomography (HRpQCT) results of abnormal trabecular microstructure and to epidemiological evidence for increased overall fracture risk in PHPT. Because DXA does not directly measure trabecular bone and HRpQCT is not widely available, we used trabecular bone score (TBS), a novel gray-level textural analysis applied to spine DXA images, to indirectly estimate trabecular microarchitecture. Objective: The purpose of this study was to assess TBS from spine DXA images in relation to HRpQCT indices and bone stiffness in the radius and tibia in PHPT. Design and Setting: This was a cross-sectional study conducted in a referral center. Patients: Participants were 22 postmenopausal women with PHPT. Main Outcome Measures: Outcomes measured were areal bone mineral density by DXA, TBS indices derived from DXA images, HRpQCT standard measures, and bone stiffness assessed by finite element analysis at the distal radius and tibia. Results: TBS in PHPT was low at 1.24, representing abnormal trabecular microstructure (normal ≥1.35). TBS was correlated with whole bone stiffness and all HRpQCT indices, except for trabecular thickness and trabecular stiffness at the radius. At the tibia, correlations were observed between TBS and volumetric densities, cortical thickness, trabecular bone volume, and whole bone stiffness. TBS correlated with all indices of trabecular microarchitecture, except trabecular thickness, after adjustment for body weight. Conclusion: TBS, a measurement technology readily available by DXA, shows promise in the clinical assessment of trabecular microstructure in PHPT.
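TBS itself is a proprietary algorithm, but it is commonly described as being derived from the experimental variogram of the gray levels of the DXA image. The sketch below illustrates that general idea only (a log-log variogram slope computed on a synthetic image patch), not the vendor's computation.

```python
# Schematic TBS-like index: log-log slope of a gray-level variogram.
# The image is random noise, standing in for a spine DXA patch.
import numpy as np

rng = np.random.default_rng(3)
img = rng.normal(size=(64, 64))

lags = np.arange(1, 8)
gamma = [np.mean((img[:, h:] - img[:, :-h]) ** 2) / 2 for h in lags]  # horizontal variogram
slope = np.polyfit(np.log(lags), np.log(gamma), 1)[0]                 # texture index (illustrative)
print(slope)
```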
Abstract:
β-blockers and β-agonists are primarily used to treat cardiovascular diseases. Inter-individual variability in response to both drug classes is well recognized, yet the identity and relative contribution of the genetic players involved are poorly understood. This work is the first genome-wide association study (GWAS) addressing the values and susceptibility of cardiovascular-related traits to a selective β1-blocker, atenolol (ate), and a β-agonist, isoproterenol (iso). The phenotypic dataset consisted of 27 highly heritable traits, each measured across 22 inbred mouse strains and four pharmacological conditions. The genotypic panel comprised 79,922 informative SNPs of the mouse HapMap resource. Associations were mapped by Efficient Mixed Model Association (EMMA), a method that corrects for the population structure and genetic relatedness of the various strains. A total of 205 separate genome-wide scans were analyzed. The most significant hits include three candidate loci related to cardiac and body weight, three loci for electrocardiographic (ECG) values, two loci for the susceptibility of atrial weight index to iso, four loci for the susceptibility of systolic blood pressure (SBP) to perturbations of the β-adrenergic system, and one locus for the responsiveness of QTc (p < 10^-8). An additional 60 loci were suggestive for one or another of the 27 traits, while 46 others were suggestive for one or the other drug effect (p < 10^-6). Most hits tagged unexpected regions, yet at least two loci for the susceptibility of SBP to β-adrenergic drugs pointed at members of the hypothalamic-pituitary-thyroid axis. Loci for cardiac-related traits were preferentially enriched in genes expressed in the heart, while 23% of the testable loci were replicated with datasets of the Mouse Phenome Database (MPD). Altogether, these data and validation tests indicate that the mapped loci are relevant to the traits and responses studied.
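Schematically, EMMA fits a mixed model y = Xβ + u + ε with u ~ N(0, σ_g² K) for a kinship matrix K, then tests each SNP by generalized least squares. The sketch below fixes the variance components for brevity (EMMA itself estimates them by maximum likelihood) and uses random stand-ins for K, the genotypes and the phenotype.

```python
# Mixed-model association scan in the spirit of EMMA; all data synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, n_snps = 22, 1000                      # 22 strains, illustrative SNP count
K = np.eye(n)                             # stand-in kinship (normally estimated from genotypes)
snps = rng.integers(0, 2, size=(n_snps, n)).astype(float)
y = rng.normal(size=n)                    # a phenotype, e.g. an SBP response

V = 0.5 * K + 0.5 * np.eye(n)             # sigma_g^2 * K + sigma_e^2 * I, fixed here
Vi = np.linalg.inv(V)

for g in snps[:3]:                        # scan a few SNPs
    X = np.column_stack([np.ones(n), g])
    XtVi = X.T @ Vi
    beta = np.linalg.solve(XtVi @ X, XtVi @ y)    # GLS estimate
    se = np.sqrt(np.linalg.inv(XtVi @ X)[1, 1])
    p = 2 * stats.norm.sf(abs(beta[1] / se))
    print(f"beta={beta[1]:+.3f}  p={p:.3g}")
```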
Abstract:
The objective of this work was to find out who the customers of a new product, the home elevator, are; what the product's opportunities on the Finnish market are; whether market growth can be expected; and how this potential growth could be captured. The characteristics of the home elevator mean that it is intended for installation in private single-family houses. Thanks to the new product, KONE can add a new territory, single-family houses, to its current strong line of business, apartment buildings. This 'expansion' brings with it the challenges of consumer markets. To assess the market situation, I made use of existing information, from newspapers to market research studies. I mapped the customer groups by reviewing contacts obtained at trade fair events. Interviews were the main research method; a total of 14 respondents took part, answering questions about purchase motives and the product. One of the most important themes of the interviews concerned communication and distribution channels, on which 12 respondents gave their opinions. The results of my work suggest that the market situation looks positive for KONE. The study identified the customer segments for the home elevator and also how these potential customers can be reached most effectively.
Abstract:
Electrical Impedance Tomography (EIT) is an imaging method which enables a volume conductivity map of a subject to be produced from multiple impedance measurements. It has the potential to become a portable non-invasive imaging technique of particular use in imaging brain function. Accurate numerical forward models may be used to improve image reconstruction but, until now, have employed an assumption of isotropic tissue conductivity. This may be expected to introduce inaccuracy, as body tissues, especially those such as white matter and the skull in head imaging, are highly anisotropic. The purpose of this study was, for the first time, to develop a method for incorporating anisotropy in a numerical forward model for EIT of the head and to assess the resulting improvement in image quality in the case of linear reconstruction, for one example of the human head. A realistic finite element model (FEM) of an adult human head, with segments for the scalp, skull, CSF and brain, was produced from a structural MRI. Anisotropy of the brain was estimated from a diffusion tensor MRI of the same subject, and anisotropy of the skull was approximated from the structural information. A method for incorporating anisotropy in the forward model and using it in image reconstruction was produced. The improvement in reconstructed image quality was assessed in computer simulation by producing forward data and then performing linear reconstruction using a sensitivity matrix approach. The mean boundary data difference between anisotropic and isotropic forward models for a reference conductivity was 50%. Use of the correct anisotropic FEM in image reconstruction, as opposed to an isotropic one, corrected an error of 24 mm in imaging a 10% conductivity decrease located in the hippocampus, improved localisation by 4-17 mm for conductivity changes deep in the brain and those due to epilepsy, and, overall, led to a substantial improvement in image quality. This suggests that incorporation of anisotropy in numerical models used for image reconstruction is likely to improve EIT image quality.
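The sensitivity matrix approach mentioned above typically amounts to a regularized linear solve. A minimal sketch is below, assuming zeroth-order Tikhonov regularization, δσ = Jᵀ(JJᵀ + λI)⁻¹δv, with a random Jacobian standing in for the one produced by the FEM forward model.

```python
# Zeroth-order Tikhonov reconstruction, using the identity
# (J^T J + lam*I)^-1 J^T = J^T (J J^T + lam*I)^-1 to solve in measurement space.
# J is random purely to keep the snippet self-contained; in practice it comes
# from the (anisotropic or isotropic) FEM forward model.
import numpy as np

rng = np.random.default_rng(5)
n_meas, n_elem = 258, 4000                 # measurements x mesh elements (illustrative)
J = rng.normal(size=(n_meas, n_elem))      # sensitivity (Jacobian) matrix
delta_v = rng.normal(size=n_meas)          # boundary voltage changes

lam = 1e-2 * np.trace(J @ J.T) / n_meas    # simple regularization heuristic (assumption)
delta_sigma = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(n_meas), delta_v)
print(delta_sigma.shape)                   # one conductivity update per element
```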
Abstract:
The aim of this study is to anticipate the development of electronic business processes using the scenario method, one of the most widely used methods of futures research. The focus is in particular on future e-business solutions in the forest industry. The study examines the characteristics of the scenario method, the principles of scenario planning, and the suitability of the method for examining changes in technology and in an industry. The theoretical part of the study examines the effect of technological change on the development of industries. It was found that technological change has a strong influence on industry change, and that every industry follows a certain development trajectory. Companies must be aware of the speed and direction of technological change and follow the rules of development of their industry. In the forest industry, the radical nature of the changes and the rapid development of ICT pose challenges in the field of electronic business processes. In the empirical part, three different scenarios of the future of e-business in the forest industry were created. The scenarios are mainly based on the current views of experts on the subject, gathered in a scenario workshop. Qualitative and quantitative elements were combined in building the scenarios. The three scenarios show that the future effects of e-business are seen as mainly positive, and that companies must develop actively and flexibly in order to exploit electronic solutions effectively in their business.
Abstract:
Current measurement sensors are needed in many kinds of applications, where they measure both the magnitude and the quality of the current and act as part of its control system. Current sensors are also needed for detecting fault conditions in various protection circuits. In frequency converters, current measurement is very important, and because of the high currents and frequencies involved it must be designed with care. This Master's thesis discusses and investigates different current measurement methods by which reliable current measurement in a frequency converter can be implemented. Different methods of implementing the current measurement are studied, after which a suitable method is selected and its different implementation alternatives are examined. After selecting a suitable implementation alternative, a current measurement sensor intended specifically for frequency converter use is designed. Finally, the characteristics of the designed sensor are studied by simulation, after which its suitability for practical applications is assessed and various ways of improving it are considered.
Abstract:
BACKGROUND: Used in conjunction with biological surveillance, behavioural surveillance provides data allowing for a more precise definition of HIV/STI prevention strategies. In 2008, a mapping of behavioural surveillance in EU/EFTA countries was performed on behalf of the European Centre for Disease Prevention and Control. METHOD: Nine questionnaires were sent to all 31 EU Member States and EEA/EFTA countries requesting data on the overall behavioural and second-generation surveillance system and on surveillance in the general population, youth, men who have sex with men (MSM), injecting drug users (IDU), sex workers (SW), migrants, people living with HIV/AIDS (PLWHA), and patients of sexually transmitted infection (STI) clinics. Requested data included information on system organisation (e.g. sustainability, funding, institutionalisation), topics covered in surveys and main indicators. RESULTS: Twenty-eight of the 31 countries contacted supplied data. Sixteen countries reported an established behavioural surveillance system, and 13 a second-generation surveillance system (a combination of biological surveillance of HIV/AIDS and STI with behavioural surveillance). There were wide differences as regards the year of survey initiation, the number of populations surveyed, the data collection methods used, and the organisation of surveillance and its coordination with biological surveillance. The populations most regularly surveyed are the general population, youth, MSM and IDU. SW, patients of STI clinics and PLWHA are surveyed less regularly and in only a small number of countries, and few countries have undertaken behavioural surveys among migrant or ethnic minority populations. In many cases, the identification of populations with risk behaviour and the selection of populations to be included in a behavioural surveillance system have not been formally conducted, or are incomplete. The topics most frequently covered are similar across countries, although many different indicators are used. In most countries, the sustainability of surveillance systems is not assured. CONCLUSION: Although many European countries have established behavioural surveillance systems, there is little harmonisation as regards the methods and indicators adopted. The main challenge now is to build and maintain organised and functional behavioural and second-generation surveillance systems across Europe, to increase collaboration, to promote robust, sustainable and cost-effective data collection methods, and to harmonise indicators.