817 results for "best-case scenario"
Abstract:
Ordinal outcomes are frequently employed in diagnosis and clinical trials. Clinical trials of Alzheimer's disease (AD) treatments are a case in point, using mild, moderate or severe disease status as the outcome measure. As in many other outcome-oriented studies, disease status may be misclassified. This study estimates the extent of misclassification in an ordinal outcome such as disease status, as well as in a predictor variable such as genotype status. An ordinal logistic regression model is commonly used to model the relationship between disease status, treatment effect, and other predictive factors. A simulation study was conducted: first, data were generated from a set of hypothetical parameters and hypothetical rates of misclassification; next, the maximum likelihood method was employed to derive likelihood equations accounting for misclassification; the Nelder-Mead simplex method was then used to solve for the misclassification and model parameters. Finally, the method was applied to an AD dataset to detect the amount of misclassification present. The estimates of the ordinal regression model parameters were close to the hypothetical parameters: β1 was hypothesized at 0.50 and the mean estimate was 0.488; β2 was hypothesized at 0.04 and the mean estimate was 0.04. Although the estimates of the rates of misclassification of X1 were not as close as those of β1 and β2, they validate the method: the X1 0-1 misclassification rate was hypothesized at 2.98% and the mean of the simulated estimates was 1.54%, and, in the best case, the misclassification of k from high to medium was hypothesized at 4.87% and had a sample mean of 3.62%. In the AD dataset, the estimated odds ratio for X1, having both copies of the APOE 4 allele, changed from 1.377 to 1.418, demonstrating that odds ratio estimates change when the analysis adjusts for misclassification.
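The estimation pipeline described above, simulating ordinal data and then maximizing the likelihood with the Nelder-Mead simplex method, can be sketched as follows. This is a minimal proportional-odds example without the misclassification terms (the study's actual contribution); all parameter values and the data-generating model are illustrative, not those of the study.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, X, y):
    """Negative log-likelihood of a 3-category proportional-odds model.
    params = [cut1, cut_gap, beta1, beta2]; cutpoints are kept ordered."""
    c1, gap, b1, b2 = params
    cuts = np.array([c1, c1 + abs(gap)])
    eta = X @ np.array([b1, b2])
    cdf = 1.0 / (1.0 + np.exp(-(cuts[None, :] - eta[:, None])))  # P(Y <= k)
    probs = np.column_stack([cdf[:, 0], cdf[:, 1] - cdf[:, 0], 1.0 - cdf[:, 1]])
    return -np.sum(np.log(probs[np.arange(len(y)), y] + 1e-12))

# Simulate ordinal data from hypothetical parameters (beta1 = 0.50, beta2 = 0.04)
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([rng.integers(0, 2, n),      # e.g. a binary genotype indicator
                     rng.normal(70.0, 8.0, n)])  # e.g. a continuous covariate
beta_true, cuts_true = np.array([0.50, 0.04]), np.array([2.0, 4.0])
cdf_true = 1.0 / (1.0 + np.exp(-(cuts_true[None, :] - (X @ beta_true)[:, None])))
y = (rng.random(n)[:, None] > cdf_true).sum(axis=1)   # ordinal outcome 0/1/2

# Solve for the parameters with the Nelder-Mead simplex method
res = minimize(neg_log_lik, x0=[1.0, 1.0, 0.0, 0.0], args=(X, y),
               method="Nelder-Mead", options={"maxiter": 5000, "xatol": 1e-6})
beta1_hat, beta2_hat = res.x[2], res.x[3]
```

The misclassification-adjusted likelihood of the study would additionally mix these category probabilities with the assumed misclassification rates before taking logs; Nelder-Mead is used in both cases because it needs no analytic gradient.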
Abstract:
Proton therapy is growing increasingly popular due to its superior dose characteristics compared to conventional photon therapy. Protons travel a finite range in the patient body and stop, thereby delivering no dose beyond their range. However, because the range of a proton beam is heavily dependent on the tissue density along its beam path, uncertainties in patient setup position and inherent range calculation can degrade the dose distribution significantly. Despite these challenges that are unique to proton therapy, current management of the uncertainties during treatment planning of proton therapy has been similar to that of conventional photon therapy. The goal of this dissertation research was to develop a treatment planning method and a plan-evaluation method that address proton-specific issues regarding setup and range uncertainties. Treatment plan design method adapted to proton therapy: Currently, for proton therapy using a scanning beam delivery system, setup uncertainties are largely accounted for by geometrically expanding a clinical target volume (CTV) to a planning target volume (PTV). However, a PTV alone cannot adequately account for range uncertainties coupled to misaligned patient anatomy in the beam path, since it does not account for the change in tissue density. To remedy this problem, we proposed a beam-specific PTV (bsPTV) that accounts for the change in tissue density along the beam path due to these uncertainties. Our proposed method was successfully implemented, and its superiority over the conventional PTV was shown through a controlled experiment. Furthermore, we have shown that the bsPTV concept can be incorporated into beam angle optimization for better target coverage and normal tissue sparing for a selected lung cancer patient.
Treatment plan evaluation method adapted to proton therapy: The dose-volume histogram of the clinical target volume (CTV), or of any other volume of interest, at the time of planning does not represent the most probable dosimetric outcome of a given plan, as it does not include the uncertainties mentioned earlier. Currently, the PTV is used as a surrogate for the CTV's worst-case scenario for target dose estimation. However, because proton dose distributions are subject to change under these uncertainties, the validity of the PTV analysis method is questionable. To remedy this problem, we proposed the use of statistical parameters to quantify uncertainties directly on both the dose-volume histogram and the dose distribution. The robust plan analysis tool was successfully implemented to compute both the expectation value and the standard deviation of the dosimetric parameters of a treatment plan under the uncertainties. For 15 lung cancer patients, the proposed method was used to quantify the dosimetric difference between the nominal situation and the expected value under the uncertainties.
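The robust-evaluation idea, computing the expectation value and standard deviation of a dosimetric parameter under setup/range uncertainties, can be illustrated with a toy one-dimensional dose profile; the profile shape, target extent, and the 3 mm uncertainty are assumptions for the sketch, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D dose profile with a sharp distal falloff (proton-like)
x = np.linspace(0.0, 100.0, 501)                    # depth [mm]
dose = np.where(x < 60.0, 1.0, np.clip(1.0 - (x - 60.0) / 5.0, 0.0, None))
target = (x >= 30.0) & (x <= 55.0)                  # CTV along this axis

def target_mean_dose(shift_mm):
    """Mean CTV dose when the whole profile shifts by shift_mm (setup/range error)."""
    shifted = np.interp(x - shift_mm, x, dose)
    return shifted[target].mean()

# Sample combined setup/range uncertainty (hypothetical 3 mm SD) and
# collect the dosimetric parameter for each sampled scenario
shifts = rng.normal(0.0, 3.0, size=2000)
metrics = np.array([target_mean_dose(s) for s in shifts])

nominal = target_mean_dose(0.0)          # what the static plan reports
expected, spread = metrics.mean(), metrics.std()   # robust statistics
```

The gap between `nominal` and `expected` is exactly the "difference between the nominal situation and the expected value" the abstract quantifies for the 15 patients.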
Abstract:
Radiation therapy for patients with intact cervical cancer is frequently delivered using primary external beam radiation therapy (EBRT) followed by two fractions of intracavitary brachytherapy (ICBT). Although the tumor is the primary radiation target, controlling microscopic disease in the lymph nodes is just as critical to patient treatment outcome. In patients in whom gross lymphadenopathy is discovered, an extra EBRT boost course is delivered between the two ICBT fractions. Since the nodal boost is an addendum to primary EBRT and ICBT, the prescription and delivery must take the previously delivered dose into account. This project aims to address the major issues of this complex process in order to improve treatment accuracy while increasing dose sparing of the surrounding normal tissues. Because external beam boosts to involved lymph nodes are given prior to the completion of ICBT, assumptions must be made about the dose to positive lymph nodes from future implants. The first aim of this project was to quantify differences in nodal dose contribution between independent ICBT fractions. We retrospectively evaluated differences in the ICBT dose contribution to positive pelvic nodes for ten patients who had previously received an external beam nodal boost. Our results indicate that the mean dose to the pelvic nodes differed by up to 1.9 Gy between independent ICBT fractions. The second aim was to develop and validate a volumetric method for summing dose to the normal tissues during prescription of the nodal boost. The traditional method of dose summation uses the maximum point dose from each modality, which represents only the worst-case scenario. However, the worst case is often an exaggeration when highly conformal therapy methods such as intensity modulated radiation therapy (IMRT) are used. We used deformable image registration algorithms to volumetrically sum dose for the bladder and rectum and created a voxel-by-voxel validation method.
The mean errors in the deformable image registration results over all voxels within the bladder and rectum were 5 and 6 mm, respectively. Finally, the third aim explored the potential use of proton therapy to reduce normal tissue dose. A major physical advantage of protons over photons is that protons stop after delivering dose in the tumor. Although theoretically superior to photons, proton beams are more sensitive to uncertainties caused by interfractional anatomical variations, which must be accounted for during treatment planning to ensure complete target coverage. We have demonstrated a systematic approach to determine population-based anatomical margin requirements for proton therapy. The observed optimal treatment angles for common iliac nodes were 90° (left lateral) and 180° (posterior-anterior [PA]) with additional 0.8 cm and 0.9 cm margins, respectively. For external iliac nodes, lateral and PA beams required additional 0.4 cm and 0.9 cm margins, respectively. Through this project, we have provided radiation oncologists with additional information about potential differences in nodal dose between independent ICBT insertions and the volumetric total dose distribution in the bladder and rectum. We have also determined the margins needed for safe delivery of proton therapy when delivering nodal boosts to patients with cervical cancer.
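Volumetric dose summation through a deformation field, the core of the second aim, can be sketched as follows; the random dose grids and the uniform one-voxel deformation are stand-ins for real planning data and a real DIR result, not the project's actual algorithm.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Toy 3-D dose grids from the two modalities (hypothetical magnitudes in Gy)
ebrt_dose = np.random.default_rng(2).random((20, 20, 20)) * 45.0  # EBRT course
icbt_dose = np.random.default_rng(3).random((20, 20, 20)) * 7.0   # one ICBT fraction

# Hypothetical deformation vector field mapping each EBRT-grid voxel into the
# ICBT image (here a uniform 1-voxel shift along every axis, as a stand-in
# for the output of a deformable image registration algorithm)
idx = np.indices(ebrt_dose.shape).astype(float)
coords = idx + 1.0

# Pull the ICBT dose onto the EBRT grid through the field, then sum voxel-by-voxel
icbt_on_ebrt = map_coordinates(icbt_dose, coords.reshape(3, -1), order=1,
                               mode="nearest").reshape(ebrt_dose.shape)
total_dose = ebrt_dose + icbt_on_ebrt
```

With a real DIR result, `coords` would vary per voxel, and the same resampling step would be applied to the bladder and rectum dose grids before summation.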
Abstract:
The impact of global climate change on coral reefs is expected to be most profound at the sea surface, where fertilization and embryonic development of broadcast-spawning corals take place. We examined the effect of increased temperature and elevated CO2 levels on the in vitro fertilization success and initial embryonic development of broadcast-spawning corals using a single male:female cross of three different species from mid- and high-latitude locations: Lyudao, Taiwan (22° N) and Kochi, Japan (32° N). Eggs were fertilized under ambient conditions (27 °C and 500 µatm CO2) and under conditions predicted for 2100 (IPCC worst-case scenario, 31 °C and 1000 µatm CO2). Fertilization success, abnormal development and early developmental success were determined for each sample. Increased temperature had a more profound influence than elevated CO2. In most cases, near-future warming caused a significant drop in early developmental success as a result of decreased fertilization success and/or increased abnormal development. The embryonic development of the male:female cross of A. hyacinthus from the high-latitude location was more sensitive to the increased temperature (+4 °C) than that of the male:female cross of A. hyacinthus from the mid-latitude location. The response to the elevated CO2 level was small and highly variable, ranging from positive to negative. These results suggest that global warming is a more significant and universal stressor than ocean acidification for the early embryonic development of corals from mid- and high-latitude locations.
Abstract:
The sustained absorption of anthropogenically released atmospheric CO2 by the oceans is modifying seawater carbonate chemistry, a process termed ocean acidification (OA). By the year 2100, the worst case scenario is a decline in the average oceanic surface seawater pH by 0.3 units to 7.75. The changing seawater carbonate chemistry is predicted to negatively affect many marine species, particularly calcifying organisms such as coralline algae, while species such as diatoms and fleshy seaweed are predicted to be little affected or may even benefit from OA. It has been hypothesized in previous work that the direct negative effects imposed on coralline algae, and the direct positive effects on fleshy seaweeds and diatoms under a future high CO2 ocean could result in a reduced ability of corallines to compete with diatoms and fleshy seaweed for space in the future. In a 6-week laboratory experiment, we examined the effect of pH 7.60 (pH predicted to occur due to ocean acidification just beyond the year 2100) compared to pH 8.05 (present day) on the lateral growth rates of an early successional, cold-temperate species assemblage dominated by crustose coralline algae and benthic diatoms. Crustose coralline algae and benthic diatoms maintained positive growth rates in both pH treatments. The growth rates of coralline algae were three times lower at pH 7.60, and a non-significant decline in diatom growth meant that proportions of the two functional groups remained similar over the course of the experiment. Our results do not support our hypothesis that benthic diatoms will outcompete crustose coralline algae under future pH conditions. However, while crustose coralline algae were able to maintain their presence in this benthic rocky reef species assemblage, the reduced growth rates suggest that they will be less capable of recolonizing after disturbance events, which could result in reduced coralline cover under OA conditions.
Abstract:
The purpose of this study is to determine the critical wear levels of the contact wire of the catenary on metropolitan lines. The study has focussed on the zones of the contact wire where localised wear is produced, normally associated with the appearance of electric arcs. To this end, a finite element model has been developed to study the dynamics of pantograph-catenary interaction. The model includes a zone of localised wear and a singularity in the contact wire in order to simulate the worst-case scenario from the point of view of stresses. In order to consider the different stages of the wire wear process, different depths and widths of the localised wear zone were defined. The results of the dynamic simulations performed for each stage of wear allow the minimum resistant cross-sectional area of the contact wire, below which stresses exceed the allowable stress, to be determined. The maximum tensile stress reached in the contact wire shows a clear sensitivity to the size of the local wear zone, defined by its width and depth. In this way, if the wear measurements taken with an overhead line recording vehicle are analysed, it will be possible to calculate the potential breakage risk of the wire. A strong dependence on the tensile force of the contact wire has also been observed. These results will allow priorities to be set for replacing the most critical sections of wire, thereby making maintenance much more efficient. The results obtained show that the wire replacement criteria currently borne in mind have turned out to be appropriate, although in some wear scenarios these criteria could be adjusted even further, thereby prolonging the life cycle of the contact wire.
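The breakage-risk check described above, comparing the stress over the minimum resistant section against the allowable stress, can be sketched for a round wire worn flat. All wire parameters and the dynamic amplification factor below are illustrative assumptions, not values from the study, which obtains the stress amplification from the dynamic pantograph-catenary simulation rather than a fixed factor.

```python
import math

# Hypothetical contact-wire parameters (not taken from the study)
TENSION_N = 15_000.0        # mechanical tension in the wire [N]
SIGMA_ALLOW_MPA = 260.0     # allowable stress for hard-drawn copper [MPa]

def residual_area(depth_mm, radius_mm=7.0):
    """Remaining cross-section of a round wire worn flat to a given depth
    (circular-segment geometry; a simplification of the real worn profile)."""
    r, h = radius_mm, depth_mm
    # Area of the circular segment removed by a flat wear plane
    segment = r * r * math.acos((r - h) / r) - (r - h) * math.sqrt(2 * r * h - h * h)
    return math.pi * r * r - segment       # [mm^2]

def breakage_risk(depth_mm, dynamic_factor=1.5):
    """True if the static tension, amplified by a dynamic factor standing in
    for the pantograph-passage stress peaks, exceeds the allowable stress
    over the worn (minimum resistant) section."""
    stress_mpa = dynamic_factor * TENSION_N / residual_area(depth_mm)
    return stress_mpa > SIGMA_ALLOW_MPA
```

Sweeping `depth_mm` over measured wear values from a recording vehicle would flag the wire sections whose replacement should be prioritised.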
Abstract:
Systems relying on fixed hardware components with a static level of parallelism can suffer from an underuse of logical resources, since they have to be designed for the worst-case scenario. This problem is especially important in video applications due to the emergence of new flexible standards, such as Scalable Video Coding (SVC), which offer several levels of scalability. In this paper, Dynamic and Partial Reconfiguration (DPR) of modern FPGAs is used to achieve run-time variable parallelism, by using scalable architectures whose size can be adapted at run-time. Based on this proposal, a scalable Deblocking Filter (DF) core, compliant with the H.264/AVC and SVC standards, has been designed. This scalable DF allows run-time addition or removal of computational units working in parallel. Scalability is offered together with a scalable parallelization strategy at the macroblock (MB) level, such that when the size of the architecture changes, the MB filtering order is modified accordingly.
Abstract:
Unraveling how the brain works is one of the main challenges faced by current science. A field of study that has aroused great expectations and interest is the analysis of the cortical structure from a morphological point of view, with the aim of building a molecular-level simulation of the brain. This is expected to deepen the study of many neurological and pathological diseases. This project pursues the study of the soma and of dendritic spines from the point of view of theoretical neuromorphology. In the state of the art it is common that, when the morphological characteristics of a three-dimensional neuron are analysed, the soma is ignored or, in the best case, replaced by a simple sphere. In fact, the concept of the soma is abstract because there is no strict, robust definition specifying exactly where the soma ends and the dendrites begin. In this project a mathematical definition of the soma is reached for the first time. With the aim of simulating somas, the attributes used in the state of the art are examined. These properties, generic in nature, do not specify a unique morphology, so a method is proposed that combines local and global properties of the morphology. Given these characteristics, the cell body is categorized into distinct classes using a new subtype of dynamic Bayesian network adapted to space. On this basis, the existence of different classes of somas is discussed, and differences between pyramidal somas from distinct layers of the brain are discovered. From the mathematical model, virtual somas are simulated for the first time.
Some spine morphologies have been attributed to certain cognitive behaviours, so it is of interest to determine the existing classes and relate them to functions of brain activity. The most widespread classification (Peters and Kaiserman-Abramof, 1970) rests on an ambiguous, subjective definition that depends on each individual's interpretation and is therefore debatable. This study is based on a set of descriptors extracted by a local topological analysis technique for 3D representations. From these data, the aim is to reach the most suitable set of classes into which to group the spines, and to describe each group by unambiguous rules. From the results, the existence of a continuum of spines and the properties that characterize each spine subtype are discussed.
Abstract:
Pseudo-total (i.e. aqua regia extractable) and gastric-bioaccessible (i.e. glycine + HCl extractable) concentrations of Ca, Co, Cr, Cu, Fe, Mn, Ni, Pb and Zn were determined in a total of 48 samples collected from six community urban gardens of different characteristics in the city of Madrid (Spain). Calcium carbonate appears to be the soil property that determines the bioaccessibility of a majority of those elements, and the lack of influence of organic matter, pH and texture can be explained by their low levels in the samples (organic matter) or their narrow range of variation (pH and texture). A conservative risk assessment with bioaccessible concentrations in two scenarios, i.e. adult urban farmers and children playing in urban gardens, revealed acceptable levels of risk, but with large differences between urban gardens depending on their history of land use and their proximity to busy areas in the city center. Only in a worst-case scenario in which children who use urban gardens as recreational areas also eat the produce grown in them would the risk exceed the limits of acceptability.
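A deterministic risk calculation of the kind used in such assessments can be sketched as a hazard quotient for incidental soil ingestion. All exposure factors, the reference dose, and the bioaccessible fraction below are illustrative USEPA-style defaults for a child scenario, not the values or element data used in the study.

```python
def hazard_quotient(conc_mg_kg, rfd_mg_kg_day, intake_soil_mg_day=100.0,
                    exp_freq_days_yr=350.0, exp_dur_yr=6.0,
                    body_weight_kg=15.0, bioaccessible_fraction=1.0):
    """Generic hazard quotient for incidental soil ingestion (deterministic
    USEPA-style model). Using the gastric-bioaccessible fraction instead of
    the pseudo-total concentration is what makes the assessment less
    conservative. All default values are illustrative."""
    avg_time_days = exp_dur_yr * 365.0                      # non-carcinogenic AT
    daily_dose = (conc_mg_kg * 1e-6 * bioaccessible_fraction *
                  intake_soil_mg_day * exp_freq_days_yr * exp_dur_yr) / \
                 (body_weight_kg * avg_time_days)           # mg/kg/day
    return daily_dose / rfd_mg_kg_day

# Example: a metal at 100 mg/kg with 60% gastric bioaccessibility and an
# illustrative reference dose of 3.5e-3 mg/kg/day
hq = hazard_quotient(100.0, rfd_mg_kg_day=3.5e-3, bioaccessible_fraction=0.6)
```

A hazard quotient below 1 corresponds to the "acceptable levels of risk" reported; summing quotients over elements and adding a produce-ingestion pathway reproduces the worst-case scenario described.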
Abstract:
Purpose The demand for rice driven by population growth in many countries has intensified the application of pesticides and the use of poor-quality water to irrigate fields. The terrestrial environment is one compartment affected by these practices, with soil acting as a reservoir that retains organic pollutants. Therefore, it is necessary to develop methods to determine insecticides in soil, to monitor areas susceptible to contamination, and to apply adequate techniques to remediate them. Materials and methods This study investigates the occurrence of ten pyrethroid insecticides (PYs) and their spatio-temporal variance in soil at two different depths, collected in two periods (before plowing and during rice production), in a paddy field area located on the Mediterranean coast. Pyrethroids were quantified using gas chromatography–mass spectrometry (GC–MS) after ultrasound-assisted extraction with ethyl acetate. The results were assessed statistically using non-parametric methods, and significant statistical differences (p < 0.05) in pyrethroid content with soil depth and proximity to wastewater treatment plants were evaluated. Moreover, a geographic information system (GIS) was used to monitor the occurrence of PYs in paddy fields and detect risk areas. Results and discussion Pyrethroids were detected at concentrations up to 57.0 ng g−1 before plowing and up to 62.3 ng g−1 during rice production, with resmethrin and cyfluthrin being the compounds found at the highest concentrations in soil. Pyrethroids were detected mainly in the topsoil, and a GIS program was used to depict the results, showing that effluents from wastewater treatment plants (WWTPs) were the main sources of soil contamination. No toxic effects on soil organisms were expected, but it is of concern that PYs may affect aquatic organisms, which represents the worst-case scenario. Conclusions A methodology to determine pyrethroids in soil was developed to monitor a paddy field area.
The use of water from WWTPs to irrigate rice fields is one of the main pollution sources of pyrethroids. It is a matter of concern that PYs may have toxic effects on aquatic organisms, as they can be desorbed from soil. Phytoremediation may play an important role in this area, reducing the possible risk associated with PY levels in soil.
Abstract:
We report automated DNA sequencing in 16-channel microchips. A microchip prefilled with sieving matrix is aligned on a heating plate affixed to a movable platform. Samples are loaded into sample reservoirs by using an eight-tip pipetting device, and the chip is docked with an array of electrodes in the focal plane of a four-color scanning detection system. Under computer control, high voltage is applied to the appropriate reservoirs in a programmed sequence that injects and separates the DNA samples. An integrated four-color confocal fluorescent detector automatically scans all 16 channels. The system routinely yields more than 450 bases in 15 min in all 16 channels. In the best case using an automated base-calling program, 543 bases have been called at an accuracy of >99%. Separations, including automated chip loading and sample injection, normally are completed in less than 18 min. The advantages of DNA sequencing on capillary electrophoresis chips include uniform signal intensity and tolerance of high DNA template concentration. To understand the fundamentals of these unique features we developed a theoretical treatment of cross-channel chip injection that we call the differential concentration effect. We present experimental evidence consistent with the predictions of the theory.
Abstract:
Engineering is the science that applies the knowledge of the basic disciplines to real-world problems. Our world is surrounded by these engineering achievements, and people need to feel comfortable and safe in them. Safety thus becomes an important factor that must be considered in any design. In naval engineering, an appropriate level of safety and, consequently, a correct structural design are currently based on deterministic studies aimed at obtaining structures capable of withstanding the worst possible loading scenario over a given period of time. Most of the loads on a ship's structure are due to the action of nature (winds, waves, currents and storms) or to human error (internal explosions, external explosions and collisions). Owing to the randomness of these events, the structural reliability of a ship should be treated as a stochastic problem under well-characterized environmental conditions. The probabilistic methodology, based on statistics and uncertainties, offers a better perspective on the real phenomena occurring in ship structures. This research aims to present structural reliability results for the design and maintenance planning of the bottom plating of ship hulls, which is subjected to variable loads from the action of sea waves and to corrosion. Statistical models were studied for the evaluation of the hull girder structure and of the structural detail of the bottom plating. In the evaluation of the hull girder, the model developed determines the probabilities of occurrence of the loads on the structure, considering deterioration by corrosion, based on a statistical investigation of the variation of the loads as a function of the waves and of the deterioration as a function of a standard corrosion rate recommended by DET NORSKE VERITAS (DNV).
The time-dependent reliability assessment approach is developed on the basis of resistance and load (R-S) curves determined using the Monte Carlo method. A long-term statistical variation of the adverse conditions is determined from a long-term statistical study of waves and fitted to a distribution based on a known design life. The work includes results for the variation over time of the reliability of an oil tanker. The case study was simplified to ease data acquisition, with the objective of corroborating the developed methodology.
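The time-dependent R-S reliability assessment by Monte Carlo can be sketched as follows. The resistance and load distributions, the corrosion rate, and the plate thickness are illustrative assumptions, not the tanker data of the study; a real analysis would fit the load distribution to long-term wave statistics and use the DNV corrosion rate.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200_000   # Monte Carlo samples per time point

def failure_probability(years, corrosion_rate_mm_yr=0.1, t0_mm=20.0):
    """P(R < S) for the bottom plating at a given age: resistance R degrades
    with corroded thickness, load S is wave-induced (hypothetical model)."""
    thickness = t0_mm - corrosion_rate_mm_yr * years       # corroded plate [mm]
    # Resistance scales with remaining thickness (normal scatter);
    # long-term extreme wave load modelled with a Gumbel distribution
    R = rng.normal(loc=300.0 * thickness / t0_mm, scale=20.0, size=N)  # [MPa]
    S = rng.gumbel(loc=180.0, scale=15.0, size=N)                       # [MPa]
    return np.mean(R < S)

pf_new = failure_probability(0.0)     # as-built reliability
pf_20y = failure_probability(20.0)    # after 20 years of corrosion wastage
```

Evaluating `failure_probability` over the design life yields the reliability-versus-time curve used to schedule maintenance of the bottom plating.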
Abstract:
New technologies have transformed teaching processes and enabled new ways of studying and learning. In these activities, it is suspected that students do not make good use of the new technologies available or, in the best case, underuse them. The analysis of this issue, together with the design of strategies to correct any defects found, is the motivation behind this work and its main purpose: to evaluate the information search habits of students and to analyse their ability to synthesize and process the results found. The researchers in this study are university teachers of first-year subjects, which allows them to observe students' information search performance at first hand.
Abstract:
Decision support systems (DSS) support business or organizational decision-making activities, which require access to information stored internally in databases or data warehouses, and externally on the Web, accessed through Information Retrieval (IR) or Question Answering (QA) systems. Graphical interfaces for querying these information sources make it easy to constrain query formulation dynamically based on user selections, but they lack flexibility in query formulation, since the expressive power is limited by the user interface design. Natural language interfaces (NLI) are expected to be the optimal solution; however, especially for non-expert users, truly natural communication is difficult to realize effectively. In this paper, we propose an NLI that improves the interaction between the user and the DSS by allowing references to previous questions or their answers (i.e. anaphora, such as the pronoun reference in “What traits are affected by them?”), or by eliding parts of the question (i.e. ellipsis, such as “And to glume colour?” after the question “Tell me the QTLs related to awn colour in wheat”). Moreover, to overcome one of the main problems of NLIs, the difficulty of adapting an NLI to a new domain, our proposal is based on ontologies obtained semi-automatically from a framework that allows the integration of internal and external, structured and unstructured information. Our proposal can therefore interface with databases, data warehouses, QA and IR systems. Because of the high ambiguity of the natural language resolution process, our proposal is presented as an authoring tool that helps the user query efficiently in natural language. Finally, our proposal is tested on a DSS case scenario about biotechnology and agriculture, whose knowledge base is the CEREALAB database as internal structured data, and the Web (e.g. PubMed) as external unstructured information.
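Session-based anaphora handling of the kind the NLI performs can be sketched with a toy dialogue context. The class, the rule, and the QTL entity names below are hypothetical illustrations, not the authors' ontology-based resolution, which disambiguates against the domain ontology rather than by string substitution.

```python
import re

class DialogueContext:
    """Toy session state for an NLI: resolves a simple pronoun anaphor
    against the entities returned by the previous answer (hypothetical
    sketch, not the authors' implementation)."""
    def __init__(self):
        self.prev_answer_entities = []   # entities from the last answer

    def resolve(self, question):
        # Replace a plural pronoun with the referents from the previous turn
        if self.prev_answer_entities and re.search(r"\bthem\b", question):
            referent = ", ".join(self.prev_answer_entities)
            return re.sub(r"\bthem\b", referent, question)
        return question

ctx = DialogueContext()
# Previous turn: "Tell me the QTLs related to awn colour in wheat"
ctx.prev_answer_entities = ["QAc.A1", "QAc.B2"]   # hypothetical QTL names
resolved = ctx.resolve("What traits are affected by them?")
```

Ellipsis ("And to glume colour?") would be handled analogously, by re-instantiating the previous question frame with the new fragment before querying the backend.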
Abstract:
Executive Summary. Both the Commission’s proposal for a ‘Competitiveness and Convergence Instrument’ and the ‘contractual arrangement’ presented by President Van Rompuy share a common concept: associating EU money with national structural reforms under a binding arrangement. The targeted ‘structural reforms’ are the labour market reforms and product and services market reforms in eurozone ‘peripheral’ countries facing the most severe external imbalances. Their implementation would speed up and facilitate the ‘internal devaluation’ process of these countries. In the worst-case scenario, failure to adopt the necessary reforms and to adjust wages and prices downwards may lead the most vulnerable countries to leave the eurozone under social and political pressure. Contracts seek to reduce this risk by increasing compliance with the country-specific recommendations for structural reforms issued by the EU institutions within the European Semester, and in particular with the Macroeconomic Imbalance Procedure (MIP). As for the financial support, it follows two different, albeit overlapping, rationales. First, the prospect of obtaining EU funding would incentivize the governments of vulnerable countries to adopt reforms that would bear a high political and social cost in the short term. That is, without some form of incentive, it is unlikely that the necessary reforms would be undertaken, and this could have significant negative consequences for the EMU as a whole. The second rationale amounts to outright solidarity: EU support is needed to cushion the inevitable socio-economic costs implied not only by the structural reforms, but also by the internal devaluation taking place. To make sense of contractual arrangements, some points should be considered in future discussions: 1. Contracts on a voluntary basis only: contracts cannot be mandatory, contrary to what was initially suggested in the Van Rompuy report.
This stems not only from the inherent definition of a ‘contract’ – where mutual consent is key – but also from the non-binding nature of the preventive arm of the MIP. Making the country-specific recommendations issued by the EU institutions systematically binding would imply transfers of sovereignty from the national to the EU level that go well beyond the present discussion. Instead, contracts would introduce the possibility of making the preventive arm binding for some countries where corrections are most needed and urgent for the EMU as a whole.