838 results for: Accelerated failure time model. Correlated data. Imputation. Residuals analysis
Abstract:
Sediments in the North Atlantic ocean contain a series of layers that are rich in ice-rafted debris and unusually poor in foraminifera. Here we present evidence that the most recent six of the 'Heinrich layers', deposited between 14,000 and 70,000 years ago, record marked decreases in sea surface temperature and salinity, decreases in the flux of planktonic foraminifera to the sediments, and short-lived, massive discharges of icebergs originating in eastern Canada. The path of the icebergs, clearly marked by the presence of ice-rafted detrital carbonate, can be traced for more than 3,000 km - a remarkable distance, attesting to extreme cooling of surface waters and enormous amounts of drifting ice. The cause of these extreme events is puzzling. They may reflect repeated rapid advances of the Laurentide ice sheet, perhaps associated with reductions in air temperatures, yet temperature records from Greenland ice cores appear to exhibit only a weak corresponding signal. Moreover, the 5-10,000-yr intervals between the events are inconsistent with Milankovitch orbital periodicities, raising the question of what the ultimate cause of the postulated cooling may have been.
Abstract:
Climatic changes cause alterations in circulation patterns of the world oceans. The highly saline Mediterranean Outflow Water (MOW), formed within the Mediterranean Sea, crosses the Strait of Gibraltar westward and turns north-westward to follow the Iberian slope at water depths of 600-1500 m. The circulation pattern and current speed of the MOW are strongly influenced by climatically induced variations and thus control sedimentation processes along the southern and western Iberian continental slope. Sedimentation characteristics of the investigated area are therefore suitable for reconstructing temporal hydrodynamic changes of the MOW. Detailed investigations of the silt-sized grain distribution, physical properties and hydroacoustic data were performed to recalculate paleo-current velocities and to understand the sedimentation history of the Gulf of Cadiz and the Portuguese continental slope. A time model based on δ18O data and 14C dating of planktic foraminifera allowed the stratigraphic classification of the core material and thus the dating of the current-induced sediment layers recording the variations in paleo-current intensity. The evaluation and interpretation of the gathered data sets enabled us to reconstruct lateral and temporal sedimentation patterns of the MOW for the Holocene and the late Pleistocene, back to the Last Glacial Maximum (LGM).
Abstract:
The section of CN railway between Vancouver and Kamloops runs along the base of many hazardous slopes, including the White Canyon, located just outside the town of Lytton, BC. The slope has a history of frequent rockfall activity, which presents a hazard to the railway below. Rockfall inventories can be used to understand the frequency-magnitude relationship of events on hazardous slopes; however, it can be difficult to consistently and accurately identify rockfall source zones and volumes on large slopes with frequent activity, leaving many inventories incomplete. We have studied this slope as part of the Canadian Railway Ground Hazard Research Program and have collected remote sensing data, including terrestrial laser scanning (TLS), photographs, and photogrammetry data, since 2012, using change detection to identify rockfalls on the slope. The objective of this thesis is to use a subset of these data to understand how rockfalls identified from TLS data can be used to characterise the frequency-magnitude relationship of rockfalls on the slope. This includes incorporating both new and existing methods to develop a semi-automated workflow to extract rockfall events from the TLS data. We show that these methods can be used to identify events as small as 0.01 m3 and that the duration between scans can affect the frequency-magnitude relationship of the rockfalls. We also show that by incorporating photogrammetry data into our analysis, we can create a 3D geological model of the slope and use it to classify rockfalls by lithology, to further understand the rockfall failure patterns. When relating the rockfall activity to triggering factors, we found that the amount of precipitation occurring over the winter affects the overall rockfall frequency for the remainder of the year.
These results can provide the railways with a more complete inventory of events than records created through track inspection or by the rockfall monitoring systems installed on the slope. In addition, we can use the database to understand the spatial and temporal distribution of events. The results can also be used as an input to rockfall modelling programs.
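The frequency-magnitude relationship described above is commonly summarised as a power law between rockfall volume and cumulative event frequency. A minimal sketch with synthetic, hypothetical volumes (not the thesis's actual inventory), fitting the scaling exponent by log-log regression:

```python
import numpy as np

def cumulative_counts(volumes, thresholds):
    """Cumulative frequency-magnitude curve: number of events >= each volume."""
    return np.array([(volumes >= t).sum() for t in thresholds])

def fit_power_law(thresholds, counts):
    """Fit log10(N) = a + b*log10(V); b < 0 is the power-law scaling exponent."""
    mask = counts > 0
    b, a = np.polyfit(np.log10(thresholds[mask]), np.log10(counts[mask]), 1)
    return a, b

# Synthetic (hypothetical) volumes with a heavy tail; smallest event 0.01 m^3
rng = np.random.default_rng(0)
volumes = 0.01 * (1.0 + rng.pareto(0.7, size=2000))
thresholds = np.logspace(-2, 1, 20)
counts = cumulative_counts(volumes, thresholds)
a, b = fit_power_law(thresholds, counts)
```

The fitted slope `b` is negative because larger rockfalls are rarer; changing the scan interval effectively censors small events and shifts this curve.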
Abstract:
The need for continuous recording rain gauges makes it difficult to determine the rainfall erosivity factor (R-factor) of the (R)USLE model in areas without good temporal data coverage. In mainland Spain, the Nature Conservation Institute (ICONA) determined the R-factor at only a few selected pluviographs, so simple estimates of the R-factor are of great interest. The objectives of this study were: (1) to identify a readily available estimate of the R-factor for mainland Spain; (2) to discuss the applicability of a single (global) estimate based on analysis of regional results; (3) to evaluate the effect of record length on estimate precision and accuracy; and (4) to validate an available regression model developed by ICONA. Four estimators based on monthly precipitation were computed at 74 rainfall stations throughout mainland Spain. The regression analysis conducted at the global level clearly showed that the modified Fournier index (MFI) ranked first among all assessed indices. The applicability of this preliminary global model across mainland Spain was evaluated by analyzing regression results obtained at the regional level. It was found that three contiguous regions of eastern Spain (Catalonia, the Valencian Community and Murcia) could have a different rainfall erosivity pattern, so a new regression analysis was conducted by dividing mainland Spain into two areas: eastern Spain and the plateau-lowland area. A comparative analysis concluded that the bi-areal regression model based on MFI for a 10-year record length provided a simple, precise and accurate estimate of the R-factor in mainland Spain. Finally, validation of the regression model proposed by ICONA showed that the R-ICONA index overpredicted the R-factor by approximately 19%.
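The modified Fournier index that ranked first among the estimators is computed from routinely available monthly precipitation alone. A minimal sketch with a hypothetical station record (the fitted regression coefficients linking MFI to the R-factor are not reproduced here):

```python
def modified_fournier_index(monthly_precip_mm):
    """MFI = sum(p_i^2) / P, with p_i the twelve monthly precipitation totals
    and P their annual sum; higher values indicate more erosive regimes."""
    annual = sum(monthly_precip_mm)
    return sum(p * p for p in monthly_precip_mm) / annual

# Hypothetical station record (mm per month, January-December)
monthly = [45, 38, 50, 55, 48, 20, 8, 10, 40, 70, 65, 52]
mfi = modified_fournier_index(monthly)
```

Because the squares weight wet months heavily, two stations with the same annual total but different seasonal concentration get different MFI values, which is what makes it a useful erosivity proxy.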
Abstract:
The aim of this work was to track and verify the delivery of respiratory-gated irradiations, performed with three versions of TrueBeam linac, using a novel phantom arrangement that combined the OCTAVIUS® SRS 1000 array with a moving platform. The platform was programmed to generate sinusoidal motion of the array. This motion was tracked using the real-time position management (RPM) system and four amplitude gating options were employed to interrupt MV beam delivery when the platform was not located within set limits. Time-resolved spatial information extracted from analysis of x-ray fluences measured by the array was compared to the programmed motion of the platform and to the trace recorded by the RPM system during the delivery of the x-ray field. Temporal data recorded by the phantom and the RPM system were validated against trajectory log files, recorded by the linac during the irradiation, as well as oscilloscope waveforms recorded from the linac target signal. Gamma analysis was employed to compare time-integrated 2D x-ray dose fluences with theoretical fluences derived from the probability density function for each of the gating settings applied, where gamma criteria of 2%/2 mm, 1%/1 mm and 0.5%/0.5 mm were used to evaluate the limitations of the RPM system. Excellent agreement was observed in the analysis of spatial information extracted from the SRS 1000 array measurements. Comparisons of the average platform position with the expected position indicated absolute deviations of <0.5 mm for all four gating settings. Differences were observed when comparing time-resolved beam-on data stored in the RPM files and trajectory logs to the true target signal waveforms. Trajectory log files underestimated the cycle time between consecutive beam-on windows by 10.0 ± 0.8 ms. All measured fluences achieved 100% pass-rates using gamma criteria of 2%/2 mm and 50% of the fluences achieved pass-rates >90% when criteria of 0.5%/0.5 mm were used. 
Results using this novel phantom arrangement indicate that the RPM system is capable of accurately gating x-ray exposure during the delivery of a fixed-field treatment beam.
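The gamma analysis used above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A simplified 1D sketch (the study compared 2D fluences; the profile shapes and the 0.3 mm shift below are hypothetical):

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, x, dose_crit, dist_crit):
    """Per-point gamma index for two 1D dose profiles sampled on the grid x.
    dose_crit: fraction of the maximum reference dose (global normalisation);
    dist_crit: distance-to-agreement in mm. A point passes when gamma <= 1."""
    d_norm = dose_crit * ref_dose.max()
    gammas = np.empty_like(eval_dose)
    for i, (xi, de) in enumerate(zip(x, eval_dose)):
        dd = (de - ref_dose) / d_norm      # dose-difference term
        dx = (xi - x) / dist_crit          # distance-to-agreement term
        gammas[i] = np.sqrt(dd**2 + dx**2).min()
    return gammas

x = np.linspace(-10.0, 10.0, 101)          # positions in mm
ref = np.exp(-x**2 / 20.0)                 # hypothetical reference profile
ev = np.exp(-(x - 0.3)**2 / 20.0)          # same profile shifted by 0.3 mm
g = gamma_1d(ref, ev, x, dose_crit=0.02, dist_crit=2.0)   # 2%/2 mm criteria
pass_rate = (g <= 1.0).mean() * 100.0
```

Tightening the criteria toward 0.5%/0.5 mm shrinks both denominators, which is why sub-millimetre deviations that pass at 2%/2 mm begin to fail, as seen in the measurements.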
Abstract:
Background: Reablement, also known as restorative care, is one possible approach to home-care services for older adults at risk of functional decline. Unlike traditional home-care services, reablement is frequently time-limited (usually six to 12 weeks) and aims to maximise independence by offering an intensive multidisciplinary, person-centred and goal-directed intervention. Objectives: To assess the effects of time-limited home-care reablement services (up to 12 weeks) for maintaining and improving the functional independence of older adults (aged 65 years or more) when compared to usual home-care or a wait-list control group. Search methods: We searched the following databases with no language restrictions during April to June 2015: the Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE (OvidSP); Embase (OvidSP); PsycINFO (OvidSP); ERIC; Sociological Abstracts; ProQuest Dissertations and Theses; CINAHL (EBSCOhost); SIGLE (OpenGrey); AgeLine and Social Care Online. We also searched the reference lists of relevant studies and reviews and contacted authors in the field. Selection criteria: We included randomised controlled trials (RCTs), cluster randomised or quasi-randomised trials of time-limited reablement services for older adults (aged 65 years or more) delivered in their home, incorporating a usual home-care or wait-list control group. Data collection and analysis: Two authors independently assessed studies for inclusion, extracted data, assessed the risk of bias of individual studies and considered the quality of the evidence using GRADE. We contacted study authors for additional information where needed. Main results: Two studies, comparing reablement with usual home-care services with 811 participants, met our eligibility criteria for inclusion; we also identified three potentially eligible studies, but findings were not yet available.
One included study was conducted in Western Australia with 750 participants (mean age 82.29 years). The second study was conducted in Norway (61 participants; mean age 79 years). We are very uncertain as to the effects of reablement compared with usual care as the evidence was of very low quality for all of the outcomes reported. The main findings were as follows. Functional status: very low quality evidence suggested that reablement may be slightly more effective than usual care in improving function at nine to 12 months (lower scores reflect greater independence; standardised mean difference (SMD) -0.30; 95% confidence interval (CI) -0.53 to -0.06; 2 studies with 249 participants). Adverse events: reablement may make little or no difference to mortality at 12 months’ follow-up (RR 0.97; 95% CI 0.74 to 1.29; 2 studies with 811 participants) or rates of unplanned hospital admission at 24 months (RR 0.94; 95% CI 0.85 to 1.03; 1 study with 750 participants). The very low quality evidence also means we are uncertain whether reablement may influence quality of life (SMD -0.23; 95% CI -0.48 to 0.02; 2 trials with 249 participants) or living arrangements (RR 0.92, 95% CI 0.62 to 1.34; 1 study with 750 participants) at time points up to 12 months. People receiving reablement may be slightly less likely to have been approved for a higher level of personal care than people receiving usual care over the 24 months’ follow-up (RR 0.87; 95% CI 0.77 to 0.98; 1 trial, 750 participants). Similarly, although there may be a small reduction in total aggregated home and healthcare costs over the 24-month follow-up (reablement: AUD 19,888; usual care: AUD 22,757; 1 trial with 750 participants), we are uncertain about the size and importance of these effects as the results were based on very low quality evidence. 
Neither study reported user satisfaction with the service. Authors' conclusions: There is considerable uncertainty regarding the effects of reablement as the evidence was of very low quality according to our GRADE ratings. Therefore, the effectiveness of reablement services cannot be supported or refuted until more robust evidence becomes available. There is an urgent need for high quality trials across different health and social care systems due to the increasingly high profile of reablement services in policy and practice in several countries.
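The standardised mean differences (SMDs) reported above pool two-group summary statistics onto a common scale. A minimal sketch of Cohen's d with an approximate normal 95% CI (the numbers in the usage line are hypothetical, not the review's data):

```python
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d from two-group summary data, with an approximate 95% CI
    based on the usual large-sample standard error."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical functional-status scores (lower = more independent)
d, ci = smd_with_ci(10.0, 2.0, 125, 11.0, 2.0, 124)
```

A negative d here favours the first group, matching the review's convention that lower functional-status scores reflect greater independence.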
Abstract:
Companies face new challenges almost every day. In order to stay competitive, it is important that companies strive for continuous development and improvement. By describing companies through their processes it is possible to get a clear overview of the entire operation, which can contribute to a well-established overall understanding of the company. This is a case study of Stort AB, a small logistics company specialized in international transportation and logistics solutions. The purpose of this study is to perform value stream mapping in order to create a more efficient production process and propose possible improvements to reduce processing time. After performing value stream mapping, data envelopment analysis is used to calculate how lean Stort AB is today and how lean the company could become by implementing the proposed improvements. The results show that the production process can become more efficient by minimizing the waste caused by a poor workplace layout and by over-processing. The authors' suggested solution is to introduce standardized processes and invest in technical instruments in order to automate the process and reduce process time. According to the data envelopment analysis, the business is 41 percent lean at present, may soon become 55 percent lean, and could reach an optimal 100 percent lean state if the process is automated.
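Data envelopment analysis scores each decision-making unit against the best observed performer. In the single-input, single-output special case, the input-oriented CCR efficiency reduces to a ratio comparison; a minimal sketch with hypothetical units (the thesis's 41/55 percent figures come from its own measured data):

```python
def ccr_efficiency(inputs, outputs):
    """Input-oriented CCR efficiency for single-input, single-output units:
    each unit's productivity ratio relative to the best observed ratio."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical units: hours of work (input) vs shipments processed (output)
scores = ccr_efficiency(inputs=[8.0, 10.0, 6.0], outputs=[40.0, 41.0, 36.0])
```

With multiple inputs and outputs the general CCR model instead solves a small linear program per unit, but the interpretation of the resulting score in [0, 1] is the same.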
Abstract:
The PolySMART demonstration system SP1b has been modeled in TRNSYS and calibrated against monitored data. The system is an example of distributed cooling with centralized CHP, where the driving heat is delivered via the district heating network. The system pre-cools the cooling water for the head office of Borlänge municipality, for which the main cooling is supplied by a 200 kW compression chiller; the SP1b system thus provides pre-cooling. It consists of a ClimateWell TDC with a nominal capacity of 10 kW together with a dry cooler for recooling and heat exchangers in the cooling and driving circuits. The cooling system is only operated from 06:00 to 17:00 on working days, and the cooling season generally runs from mid-May to mid-September. The nominal operating conditions of the main chiller are 12/15°C. The main aims of this simulation study were: to reduce the electricity consumption, if possible improving the thermal COP and capacity at the same time; and to study how the system would perform under different boundary conditions such as climate and load. The calibration of the system model was made in three stages: estimation of parameters based on manufacturer data and the dimensions of the system; calibration of each circuit (pipes and heat exchangers) separately using steady-state points; and finally calibration of the complete model in terms of thermal and electrical energy as well as running times, for a five-day time series of data with one-minute average values. All the performance figures were within 3% of the measured values apart from the running time for the driving circuit, which differed by 4%. However, the performance figures for this base-case system for the complete cooling season of mid-May to mid-September were significantly better than those from the monitoring data. This was attributed to long periods when the monitored system was not in operation and to a control parameter that hindered cold delivery at certain times.
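The thermal and electrical COP figures targeted in the study follow directly from seasonal energy balances. A minimal sketch with hypothetical values (not the SP1b monitoring data):

```python
def thermal_cop(cold_delivered_kwh, driving_heat_kwh):
    """Thermal COP of a heat-driven chiller: cold delivered per unit of driving heat."""
    return cold_delivered_kwh / driving_heat_kwh

def electrical_cop(cold_delivered_kwh, electricity_kwh):
    """Electrical COP: cold delivered per unit of parasitic electricity
    (pumps, dry cooler fans); reducing this electricity was a study aim."""
    return cold_delivered_kwh / electricity_kwh

cop_th = thermal_cop(4200.0, 6000.0)     # hypothetical seasonal totals
cop_el = electrical_cop(4200.0, 600.0)
```

Reducing fan and pump electricity raises the electrical COP even when the thermal COP is unchanged, which is why the study treats the two figures separately.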
Abstract:
Mathematical models are increasingly used in environmental science, increasing the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model, the Operational Street Pollution Model (OSPM). To assess the predictive validity of the model, the data are split into an estimation and a prediction data set using two data-splitting approaches, and data preparation techniques (clustering and outlier detection) are analysed. The sensitivity analysis, which is part of the identifiability analysis, showed that some model parameters were significantly more sensitive than others. Applying the determined optimal parameter values was shown to successfully equilibrate the model biases among the individual streets and species. It was also shown that the frequentist approach applied for the uncertainty calculations underestimated the parameter uncertainties. The model parameter uncertainty was qualitatively assessed to be significant, and reduction strategies were identified.
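A parameter sensitivity screen of the kind described can be sketched, in its simplest one-at-a-time form, as a normalised local derivative of the model output with respect to each parameter (the street pollution model itself is not reproduced; the lambda below is a toy stand-in):

```python
def oat_sensitivity(model, params, delta=0.01):
    """One-at-a-time normalised sensitivity: relative change in model output
    per relative change in each parameter, around the supplied baseline."""
    base = model(params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1.0 + delta)
        sens[name] = (model(perturbed) - base) / (base * delta)
    return sens

# Toy stand-in model: output = a^2 * b, so sensitivities are ~2 and ~1
sens = oat_sensitivity(lambda p: p["a"] ** 2 * p["b"], {"a": 3.0, "b": 2.0})
```

Parameters with near-zero normalised sensitivity are poorly identifiable from the data, which is the link between the sensitivity and identifiability analyses in the study.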
Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation to recover independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, the sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In simulated scenarios hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not unmix all sources correctly. This conclusion is based on a study of the mutual information. Nevertheless, some sources might be well separated, mainly if the number of sources is large and the signal-to-noise ratio (SNR) is high.
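The effect of the sum-to-one constraint on independence can be seen directly: abundance fractions on the simplex are negatively correlated, since increasing one fraction forces the others down. A quick numerical check (Dirichlet-distributed abundances are an illustrative assumption, not the paper's generative model):

```python
import numpy as np

# Abundance fractions lie on the simplex (each pixel's fractions sum to one),
# so fixing one fraction constrains the rest: they cannot be independent.
rng = np.random.default_rng(1)
abund = rng.dirichlet([1.0, 1.0, 1.0], size=5000)  # 3 endmembers per pixel
corr = np.corrcoef(abund.T)
# For a symmetric Dirichlet the pairwise correlation is -1/(k-1), about -0.5
# here: the "sources" ICA would try to recover are clearly dependent.
```

This dependence violates the core assumption of ICA, which is the mechanism behind the paper's conclusion that ICA cannot unmix all sources correctly.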
Abstract:
In today's big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying. This provides data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
Abstract:
In the last thirty years, the emergence and progression of biologging technology has led to great advances in marine predator ecology. Large databases of location and dive observations from biologging devices have been compiled for an increasing number of diving predator species (such as pinnipeds, sea turtles, seabirds and cetaceans), enabling complex questions about animal activity budgets and habitat use to be addressed. Central to answering these questions is our ability to correctly identify and quantify the frequency of essential behaviours, such as foraging. Despite technological advances that have increased the quality and resolution of location and dive data, accurately interpreting behaviour from such data remains a challenge, and analytical methods are only beginning to unlock the full potential of existing datasets. This review evaluates both traditional and emerging methods and presents a starting platform of options for future studies of marine predator foraging ecology, particularly from location and two-dimensional (time-depth) dive data. We outline the different devices and data types available, discuss the limitations and advantages of commonly-used analytical techniques, and highlight key areas for future research. We focus our review on pinnipeds - one of the most studied taxa of marine predators - but offer insights that will be applicable to other air-breathing marine predator tracking studies. We highlight that traditionally-used methods for inferring foraging from location and dive data, such as first-passage time and dive shape analysis, have important caveats and limitations depending on the nature of the data and the research question. We suggest that more holistic statistical techniques, such as state-space models, which can synthesise multiple track, dive and environmental metrics whilst simultaneously accounting for measurement error, offer more robust alternatives. 
Finally, we identify a need for more research to elucidate the role of physical oceanography, device effects, study animal selection, and developmental stages in predator behaviour and data interpretation.
Abstract:
This thesis presents quantitative studies of T cell and dendritic cell (DC) behaviour in mouse lymph nodes (LNs) in the naive state and following immunisation. These processes are of importance and interest in basic immunology, and better understanding could improve both diagnostic capacity and therapeutic manipulations, potentially helping to produce more effective vaccines or develop treatments for autoimmune diseases. The problem is also interesting conceptually, as it is relevant to other fields where 3D movement of objects is tracked with a discrete scanning interval. A general immunology introduction is presented in chapter 1. In chapter 2, I apply quantitative methods to multi-photon imaging data to measure how T cells and DCs are spatially arranged in LNs. This has previously been studied to describe differences between the naive and immunised states and as an indicator of the magnitude of the immune response in LNs, but previous analyses have been largely descriptive. The quantitative analysis shows that some of the previous conclusions may have been premature. In chapter 3, I use Bayesian state-space models to test hypotheses about the mode of T cell search for DCs. A two-state mode of movement, in which T cells are classified as either interacting with a DC or freely migrating, is supported over a model in which T cells home in on DCs at a distance through, for example, the action of chemokines. In chapter 4, I study whether T cell migration is linked to the geometric structure of the fibroblast reticular cell (FRC) network. I find support for the hypothesis that the movement is constrained to the FRC network over an alternative 'random walk with persistence time' model in which cells would move randomly, with short-term persistence driven by a hypothetical T cell-intrinsic 'clock'. I also present unexpected results on the FRC network geometry.
Finally, a quantitative method is presented for addressing some measurement biases inherent to multi-photon imaging. In all three chapters, novel findings are made, and the methods developed have the potential for further use to address important problems in the field. In chapter 5, I present a summary and synthesis of results from chapters 3-4 and a more speculative discussion of these results and potential future directions.
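The 'random walk with persistence time' alternative tested in chapter 4 can be sketched as a correlated random walk in which the spread of the turning angle controls how long a cell keeps its heading (this parameterisation is an illustrative assumption, not the thesis's fitted model):

```python
import math
import random

def persistent_random_walk(n_steps, persistence, step_len=1.0, seed=0):
    """2D correlated random walk: each turning angle is drawn around the
    previous heading; persistence in [0, 1) shrinks the turning-angle spread,
    so higher values give straighter, more persistent paths."""
    rng = random.Random(seed)
    x = y = 0.0
    heading = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        heading += rng.gauss(0.0, (1.0 - persistence) * math.pi)
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        path.append((x, y))
    return path

path = persistent_random_walk(100, persistence=0.9)
```

Under the competing FRC-constrained hypothesis, turning angles would instead be dictated by the junction geometry of the network rather than by an intrinsic persistence parameter.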
Abstract:
The objective of this study was to gain an understanding of the effects of population heterogeneity, missing data, and causal relationships on parameter estimates from statistical models when analyzing change in medication use. From a public health perspective, two timely topics were addressed: the use and effects of statins in primary prevention of cardiovascular disease, and polypharmacy in the older population. Growth mixture models were applied to characterize the accumulation of cardiovascular and diabetes medications among an apparently healthy population of statin initiators. The causal effect of statin adherence on the incidence of acute cardiovascular events was estimated using marginal structural models, in comparison with discrete-time hazards models. The impact of missing data on the growth estimates of the evolution of polypharmacy was examined by comparing statistical models under different assumptions about the missing-data mechanism. The data came from Finnish administrative registers and from the population-based Geriatric Multidisciplinary Strategy for the Good Care of the Elderly study conducted in Kuopio, Finland, during 2004-07. Five distinct patterns of accumulating medications emerged in the population of apparently healthy statin initiators during the two years after statin initiation. Properly accounting for time-varying dependencies between adherence to statins and confounders using marginal structural models produced estimation results comparable with those from a discrete-time hazards model. The missing-data mechanism was shown to be a key component when estimating the evolution of polypharmacy among older persons. In conclusion, population heterogeneity, missing data and causal relationships are important aspects of longitudinal studies that bear on the study question and should be critically assessed when performing statistical analyses. Analyses should be supplemented with sensitivity analyses of the model assumptions.
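The discrete-time hazards model used as a comparator operates on person-period data: each subject contributes one row per interval at risk, with a binary event indicator, and the hazard is then fitted by, for example, logistic regression. A minimal sketch of the data expansion (field names are hypothetical):

```python
def person_period(subjects):
    """Expand subject-level survival data {id: (last_period, event)} into
    person-period rows: one row per subject per interval at risk, with the
    event indicator set only in the final interval of subjects who had one."""
    rows = []
    for sid, (last_period, event) in subjects.items():
        for t in range(1, last_period + 1):
            rows.append({"id": sid, "period": t,
                         "event": int(event and t == last_period)})
    return rows

# Hypothetical subjects: "a" has an event in period 3, "b" is censored after 2
rows = person_period({"a": (3, True), "b": (2, False)})
```

The time-varying confounders that motivate marginal structural models would enter this expanded table as additional per-period columns.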
Abstract:
Nowadays, risks arising from the rapid development of the oil and gas industries are increasing significantly. As a result, one of the main concerns of both industrial and environmental managers is the identification and assessment of such risks in order to develop and maintain appropriate proactive measures. Oil spills from stationary sources in offshore zones are among the accidents with several adverse impacts on marine ecosystems. Considering a site's current situation and the relevant requirements and standards, the risk assessment process is capable not only of recognizing the probable causes of accidents but also of estimating the probability of occurrence and the severity of consequences. In this way, the results of risk assessment help managers and decision makers create and employ proper control methods. Most of the available models for risk assessment of oil spills are built on accurate databases and analysis of historical data, but unfortunately such databases are not accessible in most zones, especially in developing countries, or else they are newly established and not yet applicable. This issue reveals the necessity of using expert systems and fuzzy set theory, which make it possible to formalize the expertise and experience of specialists who have worked in petroliferous areas for many years. On the other hand, in developing countries the damage to the environment and environmental resources is often not treated as a risk assessment priority and is generally underestimated. For this reason, the model proposed in this research specifically addresses the environmental risk of oil spills from stationary sources in offshore zones.