867 results for Graph-based method
Abstract:
A new graph-based construction of generalized low-density codes (GLD-Tanner) with binary BCH constituents is described. The proposed family of GLD codes is optimal on block erasure channels and quasi-optimal on block fading channels, where optimality is considered in the outage probability sense. A classical GLD code for ergodic channels (e.g., the AWGN channel, the i.i.d. Rayleigh fading channel, and the i.i.d. binary erasure channel) is built by connecting bitnodes and subcode nodes via a unique random edge permutation. In the proposed construction of full-diversity GLD codes (referred to as root GLD), bitnodes are divided into 4 classes, subcodes are divided into 2 classes, and both sides of the Tanner graph are linked via 4 random edge permutations. The study focuses on non-ergodic channels with two states and can easily be extended to channels with three states or more.
Abstract:
This paper deals with the problem of spatial data mapping. A new method based on wavelet interpolation and geostatistical prediction (kriging) is proposed. The method, wavelet analysis residual kriging (WARK), is developed to address the problems arising with highly variable data in the presence of spatial trends, cases in which stationary prediction models have very limited application. Wavelet analysis is used to model large-scale structures, and kriging of the remaining residuals focuses on small-scale peculiarities. WARK is thus able to model spatial patterns with multiscale structure. In the present work WARK is applied to rainfall data, and the validation results are compared with those obtained from neural network residual kriging (NNRK), another residual-based method, which uses an artificial neural network to model large-scale non-linear trends. The comparison demonstrates the high-quality performance of WARK in predicting hot spots and in reproducing the global statistical characteristics of the distribution and the spatial correlation structure.
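The two-stage idea (model a smooth large-scale trend, then interpolate the residuals) can be sketched in miniature. In this hypothetical Python sketch a Haar-style pair-averaging pass stands in for the wavelet approximation and inverse-distance weighting stands in for kriging; `haar_trend`, `idw`, and `residual_predict` are illustrative names, not part of WARK:

```python
def haar_trend(values, levels=1):
    """Large-scale trend via repeated Haar-style pair averaging
    (a crude stand-in for a full wavelet approximation)."""
    approx = list(values)
    for _ in range(levels):
        approx = [(approx[i] + approx[i + 1]) / 2
                  for i in range(0, len(approx) - 1, 2)]
    factor = 2 ** levels
    # upsample back to the original length by repetition
    return [a for a in approx for _ in range(factor)][:len(values)]

def idw(x, xs, ys, power=2):
    """Inverse-distance-weighted interpolation
    (a simple stand-in for kriging of the residuals)."""
    pairs = []
    for xi, yi in zip(xs, ys):
        d = abs(x - xi)
        if d == 0:
            return yi          # exact at observed points
        pairs.append((1 / d ** power, yi))
    total = sum(w for w, _ in pairs)
    return sum(w * y for w, y in pairs) / total

def residual_predict(x, xs, ys, trend):
    """Two-stage prediction: large-scale trend plus interpolated residual."""
    residuals = [y - t for y, t in zip(ys, trend)]
    nearest = min(range(len(xs)), key=lambda i: abs(xs[i] - x))
    return trend[nearest] + idw(x, xs, residuals)
```

The point of the decomposition is that the interpolator only has to model the small-scale residual structure once the non-stationary trend has been removed.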
Abstract:
This final thesis focused on experiential knowledge of working in a group, approached from the perspective of the theatre group director. The thesis contains two practical theatre group examples. One is a youngsters' theatre group called La Drama, directed by the author during 2004/2005. The other is the author's own final artistic production, Protasio - Kohtaamisia Afrikassa (Protasio - Meeting in Africa), which opened in January 2006. The group leader's role is presented as an important personal tool in theatre, and one that is important to develop in a professional way. The director's role is to be the promoter of the process, the inspirer, and a strong group leader; a person in this role leads the group through the creative process using the community. The thesis identifies and advocates good practice and ideas on how to create theatre with a group-based method. These analyses can offer useful ideas and knowledge to beginners in the field of theatre direction.
Abstract:
The decision to revascularize a patient with stable coronary artery disease should be based on the detection of myocardial ischemia. While this decision can be straightforward for clearly significant or clearly non-significant stenoses, it is far more difficult for intermediate stenoses and requires invasive measurement of the functional impact of the coronary stenosis on maximal blood flow (fractional flow reserve = FFR). A recently developed computer-based method is able to estimate FFR from data acquired during a standard coronary CT scan (FFRcT). Two recent clinical studies (DeFACTO and DISCOVER-FLOW) showed that FFRcT improved diagnostic accuracy over standard coronary CT for the detection of myocardial ischemia, although FFRcT needs further development.
Abstract:
The Network Revenue Management problem can be formulated as a stochastic dynamic programming problem (DP, or the "optimal" solution V*) whose exact solution is computationally intractable. Consequently, a number of heuristics have been proposed in the literature, the most popular of which are the deterministic linear programming (DLP) model and a simulation-based method, the randomized linear programming (RLP) model. Both methods give upper bounds on the optimal solution value (the DLP and PHLP bounds, respectively). These bounds are used to provide control values that can be used in practice to make accept/deny decisions for booking requests. Recently, Adelman [1] and Topaloglu [18] proposed alternative upper bounds, the affine relaxation (AR) bound and the Lagrangian relaxation (LR) bound respectively, and showed that their bounds are tighter than the DLP bound. Tight bounds are of great interest, as empirical studies and practical experience suggest that models that give tighter bounds also lead to better controls (better in the sense that they lead to more revenue). In this paper we give tightened versions of three bounds, calling them sAR (strong Affine Relaxation), sLR (strong Lagrangian Relaxation), and sPHLP (strong Perfect Hindsight LP), and show relations between them. Specifically, we show that the sPHLP bound is tighter than the sLR bound, and the sAR bound is tighter than the LR bound. The techniques for deriving the sLR and sPHLP bounds can potentially be applied to other instances of weakly coupled dynamic programming.
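To illustrate where the DLP bound comes from, consider the single-resource special case: maximize the sum of fare_j * y_j subject to the allocations y_j summing to at most the capacity, with each y_j bounded by expected demand. With one shared resource this is a fractional knapsack and is solved greedily by fare; this is a hypothetical single-leg sketch (the network case needs a general LP solver), not the paper's method:

```python
def dlp_single_leg(fares, demands, capacity):
    """Single-resource DLP:
        max  sum_j fare_j * y_j
        s.t. sum_j y_j <= capacity,  0 <= y_j <= demand_j.
    With one shared resource this is a fractional knapsack,
    solved greedily from the highest fare down."""
    order = sorted(range(len(fares)), key=lambda j: -fares[j])
    y = [0.0] * len(fares)
    remaining = capacity
    for j in order:
        y[j] = min(demands[j], remaining)   # allocation doubles as a booking limit
        remaining -= y[j]
    bound = sum(f * q for f, q in zip(fares, y))  # DLP upper bound on V*
    return y, bound
```

The allocations act as booking limits for accept/deny control, and the objective value is the DLP upper bound that the paper's strengthened bounds improve upon.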
Abstract:
Accurate determination of subpopulation sizes in bimodal populations remains problematic, yet it represents a powerful way to compare cellular heterogeneity under different environmental conditions. So far, most studies have relied on qualitative descriptions of population distribution patterns, on population-independent descriptors, or on arbitrary placement of thresholds distinguishing biological ON from OFF states. We found that all these methods fall short of accurately describing small subpopulation sizes in bimodal populations. Here we propose a simple, statistics-based method for the analysis of small subpopulation sizes for use in the free software environment R and test this method on real as well as simulated data. Four so-called population splitting methods were designed, with different algorithms that can estimate subpopulation sizes from bimodal populations. All four methods proved more precise than previously used methods when analyzing the subpopulation sizes of transfer-competent cells arising in populations of the bacterium Pseudomonas knackmussii B13. The methods' resolving powers were further explored by bootstrapping and simulations. Two of the methods were not severely limited by the proportions of subpopulations they could estimate correctly, whereas the other two allowed accurate subpopulation quantification only when the subpopulation amounted to less than 25% of the total population. In contrast, only one method was still sufficiently accurate with subpopulations smaller than 1% of the total population. This study proposes a number of rational approximations to quantifying small subpopulations and offers an easy-to-use protocol for their implementation in the open-source statistical software environment R.
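Although the paper's four splitting methods are implemented in R, the general idea of replacing an arbitrary ON/OFF cutoff with a statistically chosen one can be sketched in Python. Here the threshold maximizing between-class variance (Otsu's criterion) is used; this is one simple choice for illustration, not necessarily any of the paper's four algorithms:

```python
def split_bimodal(values):
    """Place the ON/OFF threshold where the between-class variance is
    maximal (Otsu's criterion) and report the estimated ON fraction."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    best_k, best_var, cum = 1, -1.0, 0.0
    for k in range(1, n):           # xs[:k] = OFF class, xs[k:] = ON class
        cum += xs[k - 1]
        w0, w1 = k / n, (n - k) / n                 # class weights
        m0, m1 = cum / k, (total - cum) / (n - k)   # class means
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_k = var_between, k
    threshold = (xs[best_k - 1] + xs[best_k]) / 2
    return threshold, (n - best_k) / n
```

On well-separated bimodal data the threshold lands between the two modes, so the reported fraction is the ON-subpopulation size.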
Abstract:
BACKGROUND: Hyperoxaluria is a major risk factor for kidney stone formation. Although urinary oxalate measurement is part of all basic stone risk assessments, there is no standardized method for this measurement. METHODS: Urine samples from 24-h urine collections covering a broad range of oxalate concentrations were aliquoted and sent, in duplicate, to six blinded international laboratories for oxalate, sodium and creatinine measurement. In a second set of experiments, ten pairs of native urine and urine spiked with 10 mg/L of oxalate were sent for oxalate measurement. Three laboratories used a commercially available oxalate oxidase kit, two laboratories used a high-performance liquid chromatography (HPLC)-based method, and one laboratory used both methods. RESULTS: Intra-laboratory reliability for oxalate measurement, expressed as the intraclass correlation coefficient (ICC), varied between 0.808 [95% confidence interval (CI): 0.427-0.948] and 0.998 (95% CI: 0.994-1.000), with lower values for HPLC-based methods. Acidification of urine samples prior to analysis led to significantly higher oxalate concentrations. The ICC for inter-laboratory reliability varied between 0.745 (95% CI: 0.468-0.890) and 0.986 (95% CI: 0.967-0.995). Recovery of the 10 mg/L oxalate-spiked samples varied between 8.7 ± 2.3 and 10.7 ± 0.5 mg/L. Overall, HPLC-based methods showed more variability than the oxalate oxidase kit-based methods. CONCLUSIONS: Significant variability was noted in the quantification of urinary oxalate concentration by different laboratories, which may partially explain the differences in hyperoxaluria prevalence reported in the literature. Our data stress the need for standardization of the method of oxalate measurement.
Abstract:
Purpose: To perform in vivo imaging of the cerebellum with an in-plane resolution of 120 µm to observe its cortical granular and molecular layers, taking advantage of the high signal-to-noise ratio and the increased magnetic susceptibility-related contrast available at high magnetic field strengths such as 7 T. Materials and Methods: The study was approved by the institutional review board, and all participants provided written consent. Three healthy persons (two men, one woman; mean age, 30 years; age range, 28-31 years) underwent MR imaging with a 7-T system. Gradient-echo images (repetition time msec/echo time msec, 1000/25) of the human cerebellum were acquired with a nominal in-plane resolution of approximately 120 µm and a section thickness of 1 mm. Results: Structures with dimensions as small as 240 µm, such as the granular and molecular layers in the cerebellar cortex, were detected in vivo. The detection of these structures was confirmed by comparing the contrast obtained on T2*-weighted and phase images with that obtained on images of rat cerebellum acquired at 14 T with 30 µm in-plane resolution. Conclusion: In vivo cerebellar imaging at near-microscopic resolution is feasible at 7 T. Such detailed observation of an anatomic area that can be affected by a number of neurologic and psychiatric diseases, such as stroke, tumors, autism, and schizophrenia, could potentially provide new markers for diagnosis and follow-up in patients with such pathologic conditions. (c) RSNA, 2010.
Abstract:
INTRODUCTION: Currently, there is no reliable method to differentiate acute from chronic carotid occlusion. We propose a novel CTA-based method to differentiate acute from chronic carotid occlusions that could potentially aid the clinical management of patients. METHODS: We examined 72 patients with 89 spontaneously occluded extracranial internal carotid arteries using CT angiography (CTA). All occlusions were confirmed by another imaging modality and classified as acute (imaging <1 week of presumed occlusion) or chronic (imaging >4 weeks), based on circumstantial clinical and radiological evidence. A neuroradiologist and a neurologist blinded to clinical information determined the site of occlusion on axial CTA sections. They also looked for (a) hypodensity in the carotid artery (thrombus), (b) contrast within the carotid wall (vasa vasorum), (c) the site of the occluded carotid, and (d) the "carotid ring sign" (defined as the presence of a and/or b). RESULTS: Of the 89 occluded carotids, 24 were excluded because of insufficient circumstantial evidence to determine the timing of occlusion, 4 because of insufficient image quality, and 3 because of subacute timing of occlusion. Among the remaining 45 acute and 13 chronic occlusions, inter-rater agreement (kappa) was 0.88 for the site of proximal occlusion, 0.45 for the site of distal occlusion, 0.78 for luminal hypodensity, 0.82 for wall contrast, and 0.90 for the carotid ring sign. The carotid ring sign had 88.9% sensitivity, 69.2% specificity, and 84.5% accuracy for diagnosing acute occlusion. CONCLUSION: The carotid ring sign helps to differentiate acute from chronic carotid occlusion. If further confirmed, this information may be helpful in studying ischemic symptoms and selecting treatment strategies in patients with carotid occlusions.
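The reported performance figures follow from a standard 2x2 table. Back-calculating from the abstract (88.9% of the 45 acute occlusions correctly flagged and 69.2% of the 13 chronic occlusions correctly rejected gives 40 true positives and 9 true negatives), a short sketch reproduces them:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from 2x2 diagnostic-test counts."""
    sensitivity = tp / (tp + fn)            # acute occlusions correctly flagged
    specificity = tn / (tn + fp)            # chronic occlusions correctly rejected
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# counts back-calculated from the abstract's percentages
sens, spec, acc = diagnostic_metrics(tp=40, fn=5, tn=9, fp=4)
```

This yields 88.9% sensitivity, 69.2% specificity, and 84.5% accuracy, matching the abstract.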
Abstract:
Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distributions is a challenge, particularly in moist tropical regions, owing to the complex biophysical environment and the limitations of remote sensing data per se. This paper reviews a decade of experiments related to land use/cover classification in the Brazilian Amazon. Comprehensive analysis of the classification results leads to the conclusion that the spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporating suitable textural images into multispectral bands and using segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Among the available classification algorithms, the maximum likelihood classifier remains an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results, although they often require more time for parameter optimization. Proper use of hierarchical methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.
Abstract:
In this letter, we obtain the Maximum Likelihood Estimator of position in the framework of Global Navigation Satellite Systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, which consists of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach which copes with signal fading and mitigates multipath and jamming interference. Besides, the concept of Position-based Synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimate. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The Root Mean Square Error performance of the proposed algorithm is compared to those achieved with state-of-the-art synchronization techniques. A Sequential Monte Carlo based method is used to deal with the multivariate optimization problem resulting from the ML solution in an iterative way.
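A minimal sketch of the sequential Monte Carlo idea applied to a multivariate optimization: sample a cloud of candidate solutions, weight them by a likelihood-like factor, resample, and perturb with a shrinking kernel. The quadratic cost and all parameter choices below are placeholders for illustration, not the letter's GNSS ML cost function:

```python
import math
import random

def smc_minimize(cost, dim, n=200, iters=30, sigma0=1.0, seed=1):
    """SMC-style stochastic search: weight a particle cloud by
    exp(-cost) (a likelihood-like factor), resample in proportion
    to the weights, and jitter with a shrinking Gaussian kernel."""
    rng = random.Random(seed)
    particles = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    sigma = sigma0
    for _ in range(iters):
        weights = [math.exp(-cost(p)) for p in particles]
        picks = rng.choices(particles, weights=weights, k=n)   # resampling step
        particles = [[x + rng.gauss(0, sigma) for x in p] for p in picks]
        sigma *= 0.8                                           # shrink the kernel
    return min(particles, key=cost)
```

Each iteration concentrates the cloud on low-cost regions, so the search handles multivariate, multimodal objectives without gradients.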
Abstract:
Russia's strong economic growth has also significantly increased demand for logistics services in the country. Thanks to its large land area, its market potential, and the considerable transit potential offered by its geographical position between Europe and Asia, Russia is a particularly attractive investment target for logistics companies. Economic growth has not, however, been evenly distributed, so an investor should carefully determine where within Russia to locate. This thesis surveys Russia's logistics sector and transport infrastructure and examines the competitive situation, with the aim of identifying the most important locations in Russia for a logistics company. Moscow and St. Petersburg are excluded from the study. The main criteria that companies apply when choosing a location were identified through a review of the literature and earlier studies on the subject. For logistics companies, functioning infrastructure naturally plays an essential role in the choice of location, with sufficiently large market potential being another highly significant criterion. Based on the literature review, 22 cities across Russia were selected as potential locations for a logistics company. In a classification of these candidates, carried out with a scoring method based on multi-criteria analysis, the major port cities on the Black Sea coast of southern Russia emerged as the most attractive locations. Thanks to their developed infrastructure, significant market potential, and favourable economic operating conditions, they offer particularly significant investment potential for logistics companies.
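The classification step (a scoring method based on multi-criteria analysis) can be sketched as a simple weighted sum. The criteria, weights, and city scores below are invented placeholders, not the thesis's actual data:

```python
def rank_locations(scores, weights):
    """Rank candidate locations by a weighted sum of criterion scores
    (a basic multi-criteria scoring method)."""
    ranked = sorted(
        scores.items(),
        key=lambda item: -sum(weights[c] * v for c, v in item[1].items()),
    )
    return [name for name, _ in ranked]

# hypothetical example data
weights = {"infrastructure": 0.5, "market_potential": 0.3, "economy": 0.2}
scores = {
    "City A": {"infrastructure": 9, "market_potential": 7, "economy": 8},
    "City B": {"infrastructure": 6, "market_potential": 9, "economy": 5},
    "City C": {"infrastructure": 4, "market_potential": 5, "economy": 6},
}
```

With these weights, infrastructure dominates the ranking, mirroring the thesis's finding that functioning infrastructure is the most important location criterion.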
Abstract:
The purpose of this study was to examine, from a management perspective, the suitability of activity-based management for running the grocery retail business of Pirkanmaan Osuuskauppa. The aim was to construct an activity-based organizational model for the case company and to examine the preconditions for organizational change as well as the influence of strategy on the choice of organizational model. The study uses a constructive research approach and is a theoretical, action-analytical case study. Activity-based management, which aims at horizontal control of the business, is not suited to managing the grocery retail trade. It does not increase the competitiveness of Pirkanmaan Osuuskauppa's supermarket business, because its resource requirements are high and it therefore conflicts with a cost-efficiency strategy. Organizational change requires, particularly from management, strategic thinking and the ability to see both the threats that the current operating model faces in the future competitive environment and the future success factors through which the company can continue to succeed. Management must be able to communicate to personnel the necessity of the change and a vision of the future operating model. Management's role and its support for the organization during the change process are central. Strategy is the means of realizing the company's vision, goals, and objectives, and organizing operations is the principal means of achieving strategic goals and objectives.
Abstract:
The theoretical part of this thesis presents the basic principles of banks' capital adequacy regulation and risk management, and reviews the current Basel I framework and its successor, the Basel II framework. The thesis concentrates on the first pillar of the new framework and its minimum capital requirements. It examines in detail the methods for calculating the minimum capital requirement for credit risk: the standardized approach and the internal ratings-based approach. The standardized approach relies on external credit ratings, whereas the more advanced internal ratings-based approach draws on banks' own information systems and the estimates of customer creditworthiness they produce. The empirical part of the thesis uses an example bank to study the calculation of capital requirements for credit risk under Basel I and under the Basel II approaches. Following the internal ratings-based approach, risk weights are determined for the bank's balance-sheet items, and the study also examines whether the bank, with its current balance-sheet structure, could achieve a better result by optimizing its risk profile when using the more advanced internal ratings-based approach instead of the standardized approach.
Abstract:
VariScan is a software package for the analysis of DNA sequence polymorphisms at the whole-genome scale. Among other features, the software: (1) can conduct many population genetic analyses; (2) incorporates a multiresolution wavelet transform-based method that captures relevant information from DNA polymorphism data; and (3) facilitates the visualization of the results in the most commonly used genome browsers.
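To illustrate the kind of transform involved (this is a generic sketch, not VariScan's actual implementation), a one-level Haar discrete wavelet transform splits a sequence into a smooth approximation and a detail component and is exactly invertible:

```python
def haar_dwt(signal):
    """One level of the Haar DWT: split an even-length sequence (e.g., a
    diversity statistic computed in sliding windows along a chromosome)
    into a smooth approximation and a detail component."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform: reconstructs the original sequence exactly."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend(((a + d) / s, (a - d) / s))
    return out
```

Applying the forward step recursively to the approximation yields the multiresolution decomposition: coarse scales capture regional trends along the genome while fine scales capture local fluctuations.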