Abstract:
Next-generation DNA sequencing platforms can detect the entire spectrum of genomic variation and are emerging as a major tool for systematic exploration of variants and their interactions across the whole genome. However, data produced by next-generation sequencing technologies suffer from three basic problems: sequence errors, assembly errors, and missing data. Current statistical methods for genetic analysis are well suited to detecting associations of common variants but are less suited to rare variants, which poses a great challenge for sequence-based genetic studies of complex diseases. This dissertation used the genome continuum model as a general principle, and stochastic calculus and functional data analysis as tools, to develop novel and powerful statistical methods for the next generation of association studies of both qualitative and quantitative traits with sequencing data, ultimately shifting the paradigm of association analysis from the current locus-by-locus approach to the collective analysis of genome regions. In this project, functional principal component (FPC) methods coupled with high-dimensional data-reduction techniques were used to develop powerful tests of association for the entire spectrum of genetic variation within a genomic segment or gene, regardless of whether the variants are common or rare. Classical quantitative genetics methods suffer from high type I error rates and low power for rare variants. To overcome these limitations for resequencing data, this project used functional linear models with scalar response to develop statistics for identifying quantitative trait loci (QTLs) for both common and rare variants. To illustrate their application, the functional linear models were applied to five quantitative traits in the Framingham Heart Study. The project also proposed a novel concept of gene-gene co-association, in which a gene or genomic region is taken as the unit of association analysis, and used stochastic calculus to develop a unified framework for testing the association of multiple genes or genomic regions for both common and rare alleles. The proposed methods were applied to gene-gene co-association analysis of psoriasis in two independent GWAS datasets, leading to the discovery of networks significantly associated with psoriasis.
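As a rough illustration of the FPC-based region test described above, the sketch below treats each individual's genotypes across a gene as a discretized function of position, extracts functional principal component scores by SVD, and sums per-component score statistics into an approximate chi-square test. The basis, number of components, and test statistic are illustrative assumptions, not the dissertation's exact procedure.

```python
import numpy as np
from numpy.linalg import svd
from scipy.stats import chi2

def fpc_region_test(G, pos, y, n_components=3):
    """Association test between a genomic region and a binary trait.

    G   : (n, m) genotype matrix (0/1/2 minor-allele counts at m sites)
    pos : (m,) physical positions, ordering the sites along the region
    y   : (n,) binary phenotype (0/1)
    """
    order = np.argsort(pos)                     # genotypes as functions of position
    X = G[:, order] - G[:, order].mean(axis=0)  # center each site
    U, s, _ = svd(X, full_matrices=False)       # functional PCA via SVD
    scores = U[:, :n_components] * s[:n_components]  # FPC scores per individual
    yc = y - y.mean()
    T = 0.0
    for j in range(n_components):               # scores are mutually orthogonal,
        f = scores[:, j]                        # so per-component statistics add
        T += (f @ yc) ** 2 / (f @ f * (yc @ yc) / len(y))
    return T, chi2.sf(T, df=n_components)       # asymptotic chi-square p-value
```

Because the FPC scores are orthogonal by construction, the per-component statistics sum to an approximately chi-square statistic with one degree of freedom per retained component, and the region is tested as a whole rather than variant by variant.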
Abstract:
Background and purpose. Brain lesions in acute ischemic stroke measured by imaging tools provide important clinical information for diagnosis, and final infarct volume has been considered a potential surrogate marker for clinical outcomes. Strong correlations have been found between lesion volume and clinical outcomes in the National Institute of Neurological Disorders and Stroke (NINDS) t-PA Stroke Trial, but little has been published about lesion location and clinical outcomes. Studies of the NINDS t-PA Stroke Trial data found that the direction of the t-PA treatment effect on the decrease in CT lesion volume was consistent with the observed clinical effects at 3 months, but measures of t-PA treatment benefit based on CT lesion volumes showed diminished statistical significance compared with clinical scales. Methods. We used a global test to evaluate the hypothesis that lesion locations were strongly associated with clinical outcomes within each treatment group at 3 months after stroke. The anatomic locations of lesions on CT scans were used for the analysis. We also assessed the effect of t-PA on lesion location using a global statistical test. Results. In the t-PA group, patients with frontal lesions had larger infarct volumes and worse NIHSS scores at 3 months after stroke. The clinical status of patients with frontal lesions in the t-PA group was less likely to be affected by lesion volume than that of patients without frontal lesions at 3 months. Within the placebo group, both brain stem and internal capsule locations were significantly associated with lower odds of a favorable outcome at 3 months. Using a global test, we could not detect a significant effect of t-PA treatment on lesion location, although differences between the two treatment groups in the proportion of lesion findings at each location were found. Conclusions. Frontal, brain stem, and internal capsule locations were significantly related to clinical status at 3 months after stroke onset. We detected no significant t-PA effect across all nine locations, although the proportion of lesion findings differed among locations between the two treatment groups.
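One common way to operationalize such a global test is a likelihood-ratio comparison of logistic regression models with and without the full set of location indicators; the sketch below, using statsmodels, illustrates that idea and is not the trial's actual global statistic. The covariate names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def global_location_test(locations, covariates, favorable):
    """Global LR test: are the nine lesion-location indicators jointly
    associated with a favorable outcome at 3 months?

    locations : (n, 9) 0/1 indicators (frontal, brain stem, internal capsule, ...)
    covariates: (n, p) adjustment variables (hypothetical, e.g. age, baseline NIHSS)
    favorable : (n,) 0/1 clinical outcome
    """
    X_null = sm.add_constant(covariates)
    X_full = sm.add_constant(np.hstack([covariates, locations]))
    ll_null = sm.Logit(favorable, X_null).fit(disp=0).llf
    ll_full = sm.Logit(favorable, X_full).fit(disp=0).llf
    lr = 2.0 * (ll_full - ll_null)               # likelihood-ratio statistic
    return lr, chi2.sf(lr, df=locations.shape[1])
```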
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrest, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase, including selecting the proper candidate features on which to base the model and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of the time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools; it is often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to time series data elements. The first of these is the raw data elements: multiple values per variable, constituting the measured observations that are typically available to end users when they review time series data, often represented as dots on a graph. The final class of data results from performing time series analysis, and it represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes.

The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. Each of the seventeen steps in the process is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each step, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances.

The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the receiver operating characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared to the baseline multivariate model, but diminished classification accuracy compared to adding just the trend analysis features (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond that achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
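The sketch below illustrates the core idea running through the three papers: time series analysis results (here, a least-squares slope and the residual spread over a recent window) become latent candidate features alongside conventional one-value-per-variable snapshot data. The window length, the choice of vital signs, and the random forest classifier are illustrative assumptions, not the papers' validated configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trend_features(series, window=30):
    """Summarize one vital-sign series (e.g. one sample per minute) by the
    least-squares slope and the spread around that trend over the last window."""
    y = np.asarray(series, dtype=float)[-window:]
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)          # linear trend over the window
    resid_sd = np.std(y - (slope * t + intercept))  # variability around the trend
    return [slope, resid_sd]

def build_row(snapshot, hr, sbp, spo2):
    """One training example: traditional snapshot values plus trend features
    for heart rate, systolic BP, and pulse oximetry (hypothetical inputs)."""
    return list(snapshot) + trend_features(hr) + trend_features(sbp) + trend_features(spo2)

# X = np.array([build_row(...) for each observation]); y = arrest labels (0/1)
# model = RandomForestClassifier(n_estimators=200).fit(X, y)
```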
Abstract:
Cryoablation for small renal tumors has demonstrated sufficient clinical efficacy over the past decade as a non-surgical, nephron-sparing approach for treating renal masses in patients who are not surgical candidates. Minimally invasive percutaneous cryoablations have been performed with image guidance from CT, ultrasound, and MRI. During an MRI-guided cryoablation procedure, the interventional radiologist visually compares the iceball size on monitoring images with the original tumor on separate planning images. The comparisons made during the monitoring step are time-consuming, inefficient, and sometimes lack the precision needed for decision making, requiring the radiologist to make further changes later in the procedure. This study sought to mitigate uncertainty in these visual comparisons by quantifying tissue response to cryoablation and providing visualization of the response during the procedure. Based on retrospective analysis of MR-guided cryoablation patient data, registration and segmentation algorithms were investigated and implemented for periprocedural visualization, delivering iceball position and size with respect to planning images registered to within 3.3 mm and with at least 70% overlap. In addition, a quantitative logit model was developed to relate the perfusion deficit in renal parenchyma visualized in verification images to the iceball extent visualized in monitoring images. Through a retrospective study of 20 patient cases, the relationship between the likelihood of perfusion loss in renal parenchyma and distance within the iceball was quantified and iteratively fit to a logit curve. Using the parameters from the logit fit, the margin for 95% perfusion-loss likelihood was found to be 4.28 mm within the iceball, corresponding well with the clinically accepted margin of 3-5 mm within the iceball. To display the iceball position and perfusion-loss likelihood to the radiologist, algorithms were implemented in a fast segmentation and registration module that executed in under 2 minutes, within the clinically relevant 3-minute monitoring period. Across 16 patient cases, registration reduced the average Hausdorff distance from 10.1 mm to 3.21 mm and increased the average Dice similarity coefficient (DSC) from 46.6% to 82.6%.
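A minimal sketch of the logit relationship described above, assuming voxel-level observations of signed distance from the iceball surface and an observed perfusion deficit; the parameterization is standard logistic regression, and solving the fitted curve for the 95% likelihood level yields a margin of the kind the study reports.

```python
import numpy as np
import statsmodels.api as sm

def fit_perfusion_logit(d, loss):
    """d    : signed distance (mm) of sampled voxels from the iceball surface
              (positive = inside the iceball)
       loss : 1 if a perfusion deficit was observed at that voxel on
              verification imaging, else 0"""
    fit = sm.Logit(loss, sm.add_constant(d)).fit(disp=0)
    b0, b1 = fit.params
    # Solve 0.95 = 1 / (1 + exp(-(b0 + b1 * d))) for d:
    margin_95 = (np.log(0.95 / 0.05) - b0) / b1
    return fit, margin_95   # the study's fit gives a 4.28 mm margin
```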
Abstract:
The purpose of this multiple case study was to determine how hospital subsystems (such as physician monitoring and credentialing, quality assurance, risk management, and peer review) were supporting the monitoring of physicians. Three large metropolitan hospitals in Texas were studied and designated hospitals #1, #2, and #3. Recognizing that a hospital subsystem is a distinct entity and part of a larger system, conclusions were drawn on the premises of a quality control system, in relation to the tools of government (particularly the Health Care Quality Improvement Act (HCQIA)), and in relation to itself as a tool of a hospital. Three major analytical assessments were performed: first, the subsystems were analyzed for "completeness"; second, for "performance"; and third, for the interaction of completeness and performance. The physician credentialing and monitoring and peer review subsystems, as quality control systems, were most complete, efficient, and effective in hospitals #1 and #3. The HCQIA did not appear to be an influencing factor in the completeness of the subsystem in hospital #1. The quality assurance and risk management subsystems in hospital #2 were not representative of completeness and performance, and the HCQIA was not an influencing factor in the completeness of the Q.A. or R.M. systems in any hospital. The efficiency (computerization) of the physician credentialing, quality assurance, and peer review subsystems in hospitals #1 and #3 seemed to contribute to their effectiveness (system-wide effect). The results indicated that the more complete, effective, and efficient subsystems were characterized by (1) all defined activities being met, (2) the HCQIA being an influencing factor, (3) a decentralized administrative structure, (4) computerization as an important element, and (5) staff sophisticated in subsystem operations. However, other variables were identified that deserve further research as to their effect on the completeness and performance of subsystems, including (1) medical staff affiliations, (2) system funding levels, (3) the system's administrative structure, and (4) the "cultural" characteristics of the physician staff. By understanding other influencing factors, health care administrators may plan subsystems that are compatible with legislative requirements and administrative objectives.
Abstract:
The purpose of this note is to present results of grain-size analyses of 118 samples from the CRP-2/2A core using sieve and Sedigraph techniques. The samples were selected to represent the range of facies encountered, and tend to become more widely spaced with depth. Fifteen came from the upper 27 m of Quaternary and Pliocene sediments, 62 from the early Miocene-late Oligocene strata (27 to 307 mbsf), and 41 from the early Oligocene strata beneath (307 to 624 mbsf). The results are intended to provide reference data for lithological descriptions in the core logs (Cape Roberts Science Team, 1999) and to help with facies interpretation. The analytical technique used for determining the size frequency of the sand fraction in our samples (sieving) is simple, physical, and has been widely practised for over a century. It thus provides a useful reference point for analyses produced by other, faster and more sophisticated techniques, such as the Malvern laser particle size analysis system (Woolfe et al., 2000), and for estimates derived from measurements taken with down-hole logging tools (Bücker, pers. comm., 1999).
Abstract:
This article presents the progress of a Master's thesis in Information Technology Applied to Education at the Facultad de Informática of the UNLP, on the topic "Digital accessibility for users with visual impairments and its relation to virtual learning spaces". It addresses digital accessibility from the selected theoretical framework and outlines the axes of analysis within the framework of the use of technologies as tools that support cognition. The thesis proposal and its first results are presented. A first comparison is made, discussing the advantages and disadvantages of digital spaces when accessed through screen readers, which makes it possible to establish lines of future work.
Abstract:
Twenty-nine surface samples from the Portuguese shelf, recovered offshore from the mouths of the Ave, Douro, Lis and Mira rivers, were analysed by ICP-OES for selected major and trace elements after total dissolution. Organic carbon, carbonate content and grain size were also determined. Five evaluation tools were applied in order to compare the three study areas and to evaluate sediment geochemistry and other compositional variability in the acquired samples: (1) empirical methods based on comparison with standard reference criteria, e.g. the NOAA sediment quality guidelines; (2) normalisation ratios using a grain-size proxy element; (3) the "Gradient Method", plotting contaminant vs. organic matter or Al; (4) definition of a regional geochemical baseline from a compiled database; and (5) enrichment factors. The evaluation of element and component associations indicates differences related both to the onshore drainage areas and to the environmental shelf setting. Despite the considerable variability in total metal contents indicated by our results, the sediment metal composition is largely of natural origin. Metal enrichments observed in the Mira area are associated with the drainage of mineralised areas rich in Cu, Pb, Zn, Fe and Mn. The near absence of human impact on shelf sediments, despite the vicinity of urban areas with high industrialisation levels, such as the Ave-Douro and Lis areas, is attributed to effective trapping in the estuaries and coastal zones. The character of the contaminated sediments transported to these shelf areas is further influenced by grain-size sorting as well as by dilution with less contaminated marine sediments. The results obtained individually by the different methods complement each other and allow more specific interpretations.
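For method (5), an enrichment factor normalizes a metal concentration to a grain-size proxy element (here Al) and compares the ratio with a baseline; the sketch below uses placeholder baseline values, not the regional baseline compiled in the study.

```python
def enrichment_factor(metal, al, metal_baseline, al_baseline):
    """EF = (Me/Al)_sample / (Me/Al)_baseline. EF near 1 points to a natural
    origin; EF well above 1 suggests anthropogenic or mineralised-source
    enrichment."""
    return (metal / al) / (metal_baseline / al_baseline)

# Hypothetical example: Cu = 25 mg/kg and Al = 6.5 % in a sample, against
# illustrative baseline values of Cu = 18 mg/kg and Al = 7.2 %.
ef_cu = enrichment_factor(25.0, 6.5, metal_baseline=18.0, al_baseline=7.2)
```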
Abstract:
An overview is presented of the current state of knowledge on paleo-ecological aspects of calcareous dinoflagellate resting cysts. In addition to literature-based information, new results are discussed from Equatorial Atlantic surface plankton samples, surface sediment samples, and Late Quaternary sediments from two gravity cores. With the aid of redundancy analysis, variations in the calcareous cyst content of both cores are correlated with variations in total organic carbon (TOC). On a global scale, the calcareous cyst distribution in bottom sediments varies with latitude and along inshore-offshore gradients. In the Equatorial Atlantic Ocean, enhanced calcareous cyst production can be observed in regions and time intervals with stratified, oligotrophic conditions in the upper water masses.
Poverty Analysis of Ethiopian Females in the Amhara Region: Utilizing BMI as an Indicator of Poverty
Abstract:
This paper analyzes poverty-affected females in the Amhara region of Ethiopia. As its measure of poverty, the paper uses the body mass index (BMI), one of the most effective tools for measuring poverty at the individual level. The results of the BMI analysis show that the most poverty-affected female group is female household heads in urban areas. The results, however, should be interpreted carefully, considering the different social and economic structures of urban and rural areas and the interdependent relationship between the two. In rural areas, access to land is the biggest issue affecting BMI, while in urban areas the occupation of a husband or partner is more important. These differences by area do not mean that there is no intersection between the urban and rural female groups, because the majority of females in urban areas migrated from rural areas for various reasons, such as divorce, marriage, and job opportunities.
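For reference, BMI is weight in kilograms divided by height in metres squared, and BMI below 18.5 is the standard WHO cutoff for underweight (chronic energy deficiency); whether the paper applies exactly this threshold is an assumption in the sketch below.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# BMI < 18.5 is the standard WHO underweight cutoff, often used as a proxy
# for individual-level deprivation; the paper's exact threshold is not
# specified here.
undernourished = bmi(43.0, 1.58) < 18.5
```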
Abstract:
Studies on the rise of global value chains (GVCs) have attracted a great deal of interest in the recent economics literature. However, due to statistical and methodological challenges, most existing research ignores domestic regional heterogeneity in assessing the impact of joining GVCs. GVCs are supported not only directly by domestic regions that export goods and services to the world market, but also indirectly by other domestic regions that provide parts, components, and intermediate services to the final exporting regions. To better understand the nature of a country's position and degree of participation in GVCs, we need to fully examine the role of individual domestic regions. Understanding the domestic components of GVCs is especially important for larger economies such as China, the US, India and Japan, where there may be large variations in economic scale, geography of manufacturing, and development stage at the domestic regional level. This paper proposes a new framework for measuring domestic linkages to global value chains, which endogenously embeds a target country's (e.g. China's or Japan's) domestic interregional input-output tables into the OECD inter-country input-output model. Using this framework, we can more clearly understand how global production is fragmented and extended both internationally and domestically.
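The accounting underneath such a framework is Leontief input-output algebra, x = (I - A)^-1 f, with the target country's regions appearing as separate rows and columns of the coefficient matrix. The toy system below (two domestic regions plus an aggregated rest of world) is purely illustrative and does not reproduce the OECD inter-country table structure.

```python
import numpy as np

# Toy interregional system: domestic regions R1 and R2 embedded alongside an
# aggregated rest-of-world (ROW) sector. a_ij = inputs bought from i per unit
# of output of j.
A = np.array([
    [0.15, 0.10, 0.05],   # R1
    [0.08, 0.20, 0.04],   # R2
    [0.06, 0.07, 0.18],   # ROW
])
f = np.array([100.0, 80.0, 300.0])   # final demand served by each producer

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^-1
x = L @ f                            # gross output required to meet demand

# Output induced in each domestic region per unit of ROW final demand: a
# crude measure of indirect (upstream) participation in global chains.
induced = L[:2, 2]
```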
Abstract:
Transportation infrastructure is known to affect the value of real estate property by virtue of changes in accessibility. The impact of transportation facilities is highly localized as well, and it is possible that spillover effects result from the capitalization of accessibility. The objective of this study was to review the theoretical background of spatial hedonic models and the opportunities they provide to evaluate the effect of new transportation infrastructure. An empirical case study is presented: Madrid Metro Line 12, known as Metrosur, in the region of Madrid, Spain. The effect of proximity to metro stations on housing prices was evaluated. The analysis took into account a host of variables, including structure, location, and neighborhood, and made use of three modeling approaches: linear regression estimated with ordinary least squares, a spatial error model, and a spatial lag model. The results indicated that better accessibility to Metrosur stations had a positive impact on real estate values and that the effect was marked in cases in which a house was for sale. The results also showed the presence of submarkets, well defined by geographic boundaries and transport fares, which implies that the economic benefits differ across municipalities.
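A sketch of the spatial lag specification used alongside the OLS and spatial error models: y = ρWy + Xβ + ε, where W is a spatial weights matrix over houses. The estimator below is a Kelejian-Prucha-style two-stage least squares with [X, WX] as instruments; the study's software and exact estimator are not specified, so this is only one standard way to fit such a model.

```python
import numpy as np

def spatial_lag_2sls(y, X, W):
    """2SLS for the spatial lag hedonic model  y = rho*W y + X beta + eps.

    y : (n,) log housing prices
    X : (n, k) hedonic attributes with a leading intercept column (structure,
        location, neighborhood, distance to the nearest Metrosur station, ...)
    W : (n, n) row-standardized spatial weights matrix
    """
    Wy = W @ y
    Z = np.column_stack([X, Wy])            # regressors incl. endogenous lag
    H = np.column_stack([X, W @ X[:, 1:]])  # instruments: X and spatially lagged X
    PZ = H @ np.linalg.lstsq(H.T @ H, H.T @ Z, rcond=None)[0]  # project Z on H
    theta = np.linalg.lstsq(PZ.T @ Z, PZ.T @ y, rcond=None)[0]
    return theta[:-1], theta[-1]            # (beta, rho)
```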
Abstract:
This paper, presented at CIB: Management and Innovation for a Sustainable Built Environment 2011, studies and analyses the residential model of a rural area of the Iberian Peninsula, specifically the province of Cáceres, in the autonomous region of Extremadura, Spain. To this end, starting from a database of building projects whose real costs are known, it seeks to establish the relationships among the different parameters studied through the corresponding statistical analysis functions. One of the main objectives of this process is to identify the design variables of greatest economic importance, so that these parameters, generally geometrical and typological, can be kept under economic control from the very start of the project. More generally, the aim is a better optimization of resources in the construction of dwellings in the rural environment from the design stage onward.
Abstract:
The consideration of real operating conditions in the design and optimization of a multijunction solar cell receiver-concentrator assembly is indispensable. Such a requirement involves the need for suitable modeling and simulation tools to complement the experimental work and circumvent its well-known burdens and restrictions. Three-dimensional distributed models have been demonstrated in the past to be a powerful choice for the analysis of distributed phenomena in single- and dual-junction solar cells, as well as for the design of strategies to minimize solar cell losses under high concentrations. In this paper, we present the application of these models to the analysis of triple-junction solar cells under real operating conditions. The impact of different chromatic aberration profiles on the short-circuit current of triple-junction solar cells is analyzed in detail using the developed distributed model. Current spreading conditions the impact of a given chromatic aberration profile on the solar cell I-V curve; the focus is therefore on determining the role of current spreading in the connection between the photocurrent profile, subcell voltage and current, and the sheet resistance of the semiconductor layers.
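A lumped, zero-dimensional caricature of why current spreading matters, under toy illumination profiles: with perfect lateral spreading the series-connected stack is limited by the smallest total subcell current, whereas with no spreading each elementary area is current-matched locally, so the stack collects roughly the area average of the pointwise minimum. The 3-D distributed model in the paper resolves the real behavior between these two extremes; all profiles below are illustrative assumptions.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 401)          # normalized position across the cell
g_mid = np.full_like(x, 1.0)             # middle-subcell band: uniform profile
g_top = 0.5 + np.exp(-(x / 0.35) ** 2)   # top-subcell band focused on-axis
g_top *= g_mid.sum() / g_top.sum()       # chromatic aberration redistributes...
g_bot = 2.0 - g_top                      # ...while each band keeps its power

# Perfect lateral current spreading: subcell photocurrents pool over the
# whole area, so only per-band totals matter.
best_case = min(g.mean() for g in (g_top, g_mid, g_bot))

# No lateral spreading: every elementary area is series-limited locally.
worst_case = np.minimum.reduce([g_top, g_mid, g_bot]).mean()

print(best_case, worst_case)  # the gap between these extremes is what the
                              # 3-D distributed model quantifies
```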
Abstract:
Incoming university students have education deficiencies, yet university studies require a sound basis of scientific knowledge. This project analyses the instruments that Spanish public universities use to reinforce knowledge in the areas related to the studies students are about to begin. There are important differences among universities and, within each university, great differences among degree programmes. Introductory courses ("cursos cero") are more widespread (in 50% of universities) than self-evaluation instruments (14% of universities). The diffusion of these instruments needs to improve, since at present it is not possible to evaluate them. The following actions are therefore proposed: carry out regular standardized surveys of professors and students; publish the survey results; and have public universities institutionalize their basic training offer and improve its dissemination, especially through the web. This paper presents a questionnaire to assess student opinion about these tools. To analyse their effectiveness and make an initial estimate of their evaluation, we conducted a pilot test of the questionnaire with 68 students at the University of Extremadura. Preliminary statistical analysis of the pilot test indicates that the survey results are reliable. A global evaluation of both tools, on a scale of 1 to 5, gave an average score of 3.29 for introductory courses and 3.41 for self-evaluation; 72.9% of the students consider "self-assessment" more effective than the "initial course".