859 results for HÁBEAS DATA


Relevance: 20.00%

Abstract:

This poster presents key features of how QUT’s integrated research data storage and management services work with researchers through their individual or team research life cycle. By understanding the characteristics of research data, and the long-term need to store this data, QUT has provided resources and tools that support QUT’s goal of being a research-intensive institution. Key to successful delivery and operation has been the focus on researchers’ individual needs and the collaboration between providers, in particular Information Technology Services, High Performance Computing and Research Support, and QUT Library. QUT’s Research Data Storage service provides all QUT researchers (staff and Higher Degree Research students (HDRs)) with a secure data repository throughout the research data lifecycle. Three distinct storage areas provide for raw research data to be acquired, project data to be worked on, and published data to be archived. Since the service was launched in late 2014, it has provided research project teams from all QUT faculties with acquisition, working or archival data space. Feedback indicates that the storage suits the unique needs of researchers and their data. As part of the workflow to establish storage space for researchers, Research Support Specialists and Research Data Librarians consult with researchers and HDRs to identify data storage requirements for projects and individual researchers, and to select and implement the most suitable data storage services and facilities. While research can be a journey into the unknown [1], a plan can help navigate the uncertainty. Intertwined with the storage provision is QUT’s Research Data Management Planning tool. Launched in March 2015, it has already attracted 273 QUT staff and 352 HDR student registrations, and over 620 plans have been created (2/10/2015). Developed in collaboration with the Office of Research Ethics and Integrity (OREI), the tool has seen uptake that has exceeded expectations.

Relevance: 20.00%

Abstract:

Surveying bird species richness is one of the most intriguing ecological topics for evaluating environmental health. Here, bird species richness denotes the number of unique bird species in a particular area. Factors affecting the investigation of bird species richness include weather, observation bias, and, most importantly, the prohibitive cost of conducting surveys at large spatiotemporal scales. Thanks to advances in recording techniques, these problems have been alleviated by deploying sensors for acoustic data collection. Although automated detection techniques have been introduced to identify various bird species, the innate complexity of bird vocalizations, the background noise present in recordings, and the escalating volume of acoustic data make determining bird species richness a challenging task. In this paper we propose a two-step computer-assisted sampling approach for determining bird species richness in one-day acoustic data. First, a classification model is built on acoustic indices to filter out minutes that contain few bird species. Then the classified bird minutes are ordered by an acoustic index and redundant temporal minutes are removed from the ranked minute sequence. The experimental results show that our method directs experts to the determination of bird species more efficiently than previous methods.
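A minimal sketch of this two-step idea, assuming per-minute acoustic-index features and a generic classifier; the specific indices, classifier, thresholds and redundancy rule used in the paper are not reproduced here, and the helper names below are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_minutes(train_X, train_y, day_X, ranking_index, min_gap=5):
    """Two-step computer-assisted sampling of one-minute audio segments (sketch).

    train_X       : per-minute acoustic-index features with known labels
    train_y       : 1 if the minute contains bird vocalisations worth inspecting, else 0
    day_X         : per-minute features for the one-day recording (1440 rows)
    ranking_index : column of the acoustic index used to rank retained minutes
    min_gap       : minutes closer than this to an already selected minute are
                    treated as temporally redundant and skipped
    """
    # Step 1: filter out minutes predicted to contain few or no bird species.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(train_X, train_y)
    kept_idx = np.flatnonzero(clf.predict(day_X) == 1)

    # Step 2: rank the kept minutes by one acoustic index (highest first) and
    # drop those lying too close in time to a minute that is already selected.
    order = kept_idx[np.argsort(day_X[kept_idx, ranking_index])[::-1]]
    selected = []
    for m in order:
        if all(abs(int(m) - s) >= min_gap for s in selected):
            selected.append(int(m))        # minute to hand to the expert
    return selected
```

The minutes returned would then be played back by an expert, who records the species heard.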

Relevance: 20.00%

Abstract:

This article describes a maximum likelihood method for estimating the parameters of the standard square-root stochastic volatility model and a variant of the model that includes jumps in equity prices. The model is fitted to data on the S&P 500 Index and the prices of vanilla options written on the index, for the period 1990 to 2011. The method is able to estimate both the parameters of the physical measure (associated with the index) and the parameters of the risk-neutral measure (associated with the options), including the volatility and jump risk premia. The estimation is implemented using a particle filter whose efficacy is demonstrated under simulation. The computational load of this estimation method, which previously has been prohibitive, is managed by the effective use of parallel computing using graphics processing units (GPUs). The empirical results indicate that the parameters of the models are reliably estimated and consistent with values reported in previous work. In particular, both the volatility risk premium and the jump risk premium are found to be significant.
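As context for how a simulation-based likelihood of this kind can be evaluated, here is a minimal bootstrap particle filter for the return side of a square-root stochastic volatility model; the Euler discretisation, Gaussian observation equation and parameter names are illustrative assumptions, and the paper's full method additionally handles option prices, jumps and the risk-neutral parameters, with the filter parallelised on GPUs.

```python
import numpy as np

def pf_loglik(returns, mu, kappa, theta, sigma_v, dt=1.0 / 252, n_particles=5000, seed=0):
    """Bootstrap particle filter log-likelihood for a square-root SV model (sketch).

    State:       v_{t+dt} = v_t + kappa*(theta - v_t)*dt + sigma_v*sqrt(v_t*dt)*eta
    Observation: r_t      = mu*dt + sqrt(v_t*dt)*eps,    eta, eps ~ N(0, 1)
    """
    rng = np.random.default_rng(seed)
    v = np.full(n_particles, theta)                     # start at the long-run variance
    loglik = 0.0
    for r in returns:
        # Propagate particles with an Euler step, truncated to keep variance positive.
        shock = rng.standard_normal(n_particles)
        v = np.maximum(v + kappa * (theta - v) * dt
                       + sigma_v * np.sqrt(np.maximum(v, 0.0) * dt) * shock, 1e-12)
        # Weight particles by the Gaussian density of the observed return.
        var = v * dt
        w = np.exp(-0.5 * (r - mu * dt) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
        loglik += np.log(w.mean() + 1e-300)
        # Multinomial resampling to avoid weight degeneracy.
        v = v[rng.choice(n_particles, size=n_particles, p=w / w.sum())]
    return loglik
```

Maximising such a log-likelihood over the model parameters is the kind of computation the paper accelerates on graphics processing units.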

Relevance: 20.00%

Abstract:

Big Data and predictive analytics have received significant attention from the media and academic literature over the past few years, and it is likely that these emerging technologies will materially impact the mining sector. This short communication argues, however, that these technological forces will probably unfold differently in the mining industry than they have in many other sectors, because of significant differences in the marginal cost of data capture and storage. To this end, we offer a brief overview of what Big Data and predictive analytics are, and explain how they are bringing about changes in a broad range of sectors. We discuss the “N = all” approach to data collection being promoted by many consultants and technology vendors in the marketplace but, by considering the economic and technical realities of data acquisition and storage, we then explain why an “n ≪ all” data collection strategy probably makes more sense for the mining sector. Finally, towards shaping the industry’s policies with regard to technology-related investments in this area, we conclude by putting forward a conceptual model for leveraging Big Data tools and analytical techniques that is a more appropriate fit for the mining sector.

Relevance: 20.00%

Abstract:

The rapid development of Traditional Chinese Medicine (TCM) has produced a continually increasing volume of clinical and research data. The lack of standardised terminology and flaws in data quality planning and management in TCM informatics are hindering clinical decision-making, drug discovery and education. This paper argues that introducing data warehousing technologies is essential to enhancing the effectiveness and durability of TCM data management. To showcase the role of data warehousing in improving TCM, the paper presents and explains in detail a practical data warehousing model, based on structured electronic records, for TCM clinical research and medical knowledge discovery.
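For illustration only (the paper's own warehouse design is not reproduced here), a dimensional "star schema" for structured TCM clinical records could separate a central fact table of prescription events from descriptive dimensions; every table and field name below is hypothetical.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical dimension tables describing the context of each clinical record.
@dataclass
class PatientDim:
    patient_key: int
    sex: str
    birth_year: int

@dataclass
class SyndromeDim:                 # TCM syndrome (zheng) differentiation
    syndrome_key: int
    name: str

@dataclass
class FormulaDim:                  # herbal formula prescribed
    formula_key: int
    name: str
    herbs: tuple

# Central fact table: one row per prescription event, holding foreign keys to
# the dimensions plus the measures that analyses would aggregate.
@dataclass
class PrescriptionFact:
    patient_key: int
    syndrome_key: int
    formula_key: int
    visit_date: date
    dosage_g: float
    outcome_score: float
```

Separating facts from dimensions in this way is what allows warehouse queries to aggregate outcomes by syndrome, formula or patient group for knowledge discovery.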

Relevance: 20.00%

Abstract:

Cancer is the leading contributor to the disease burden in Australia. This thesis develops and applies Bayesian hierarchical models to investigate the spatial and temporal associations in cancer diagnosis and survival among Queenslanders. The key objectives are to document and quantify the importance of spatial inequalities, explore factors influencing these inequalities, and investigate how spatial inequalities change over time. Existing Bayesian hierarchical models are refined, new models and methods are developed, and tangible benefits are obtained for cancer patients in Queensland. The versatility of Bayesian models in cancer control is clearly demonstrated through these detailed and comprehensive analyses.
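The abstract does not specify the model forms, but a common Bayesian hierarchical model for small-area cancer counts of the kind described is the Besag-York-Mollié formulation, sketched here purely as background; the models actually refined in the thesis may differ.

```latex
\begin{aligned}
y_i &\sim \operatorname{Poisson}(E_i \rho_i),\\
\log \rho_i &= \beta_0 + \mathbf{x}_i^{\top}\boldsymbol{\beta} + u_i + v_i,\\
u_i \mid u_{j\neq i} &\sim \mathcal{N}\!\Bigl(\tfrac{1}{n_i}\textstyle\sum_{j \sim i} u_j,\ \tfrac{\sigma_u^2}{n_i}\Bigr),\qquad
v_i \sim \mathcal{N}(0, \sigma_v^2),
\end{aligned}
```

where y_i and E_i are the observed and expected counts in area i, rho_i is the area-level relative risk, u_i is a spatially structured (conditional autoregressive) random effect over the n_i neighbouring areas, and v_i captures unstructured heterogeneity.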

Relevance: 20.00%

Abstract:

Over recent years, the focus in road safety has shifted towards a greater understanding of serious road crash injuries in addition to fatalities. Police-reported crash data are often the primary source of crash information; however, the definition of serious injury within these data is not consistent across jurisdictions and may not be accurately operationalised. This study examined the linkage of police-reported road crash data with hospital data to explore the potential for linked data to enhance the quantification of serious injury. Data from the Queensland Road Crash Database (QRCD), the Queensland Hospital Admitted Patients Data Collection (QHAPDC), the Emergency Department Information System (EDIS), and the Queensland Injury Surveillance Unit (QISU) for the year 2009 were linked. Nine different estimates of serious road crash injury were produced. Results showed a large amount of variation in the estimated number and profile of serious road crash injuries depending on the definition or measure used. The results also showed that, as the definition of serious injury becomes more precise, vulnerable road users become more prominent. These results have major implications for how serious injuries are identified for reporting purposes. Depending on the definitions used, the calculation of cost and the understanding of the impact of serious injuries would vary greatly. This study has shown how data linkage can be used to investigate issues of data quality, and has demonstrated the potential for data linkage to improve understanding of the road safety problem, particularly serious injury.
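In the study the linkage itself was carried out under formal data-custodian protocols, so the snippet below is only a toy illustration of deterministically linking a crash record to a hospital admission on a shared identifier and date window; all column names and the serious-injury rule are hypothetical.

```python
import pandas as pd

# Hypothetical, simplified inputs: one row per injured person in each source.
crashes = pd.DataFrame({
    "person_id_hash": ["a1", "b2", "c3"],
    "crash_date": pd.to_datetime(["2009-03-02", "2009-05-14", "2009-07-21"]),
    "police_severity": ["hospitalised", "medically treated", "hospitalised"],
})
admissions = pd.DataFrame({
    "person_id_hash": ["a1", "c3", "d4"],
    "admission_date": pd.to_datetime(["2009-03-02", "2009-07-22", "2009-09-01"]),
    "principal_diagnosis": ["S72.0", "S06.0", "S42.2"],
})

# Deterministic linkage on the shared identifier, keeping only admissions that
# occur within one day of the crash.
linked = crashes.merge(admissions, on="person_id_hash", how="inner")
linked = linked[(linked["admission_date"] - linked["crash_date"]).dt.days.between(0, 1)]

# A hospital-based definition of 'serious' (here: any linked admission) can then
# be compared with the police severity label for the same people.
crashes["hospital_confirmed"] = crashes["person_id_hash"].isin(linked["person_id_hash"])
print(crashes[["person_id_hash", "police_severity", "hospital_confirmed"]])
```

Varying the rule that defines a linked case as "serious" is what produces the different estimates compared in the study.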

Relevance: 20.00%

Abstract:

Data generated via user activity on social media platforms is routinely used for research across a wide range of social sciences and humanities disciplines. The availability of data through the Twitter APIs in particular has afforded new modes of research, including in media and communication studies; however, there are practical and political issues with gaining access to such data, and with the consequences of how that access is controlled. In their paper ‘Easy Data, Hard Data’, Burgess and Bruns (2015) discuss both the practical and political aspects of Twitter data as they relate to academic research, describing how communication research has been enabled, shaped and constrained by Twitter’s “regimes of access” to data, the politics of data use, and emerging economies of data exchange. This conceptual model, including the ‘easy data, hard data’ formulation, can also be applied to Sina Weibo. In this paper, we build on this model to explore the practical and political challenges and opportunities associated with the ‘regimes of access’ to Weibo data, and their consequences for digital media and communication studies. We argue that in the Chinese context, the politics of data access can be even more complicated than in the case of Twitter, which makes scientific research relying on large social data from this platform more challenging in some ways, but potentially richer and more rewarding in others.

Relevance: 20.00%

Abstract:

Introduction: two symposia on “cardiovascular diseases and vulnerable plaques”. Cardiovascular disease (CVD) is the leading cause of death worldwide. Huge effort has been made in many disciplines, including medical imaging, computational modeling, biomechanics, bioengineering, medical devices, animal and clinical studies, population studies, as well as genomic, molecular, cellular and organ-level studies, seeking improved methods for early detection, diagnosis, prevention and treatment of these diseases [1-14]. However, the mechanisms governing the initiation, progression and occurrence of final acute clinical CVD events are still poorly understood. A large number of victims of these diseases, who are apparently healthy, die suddenly without prior symptoms. Available screening and diagnostic methods are insufficient to identify the victims before the event occurs [8,9]. Most cardiovascular diseases are associated with vulnerable plaques. A grand challenge here is to develop new imaging techniques, predictive methods and patient screening tools to identify vulnerable plaques and patients who are more vulnerable to plaque rupture and associated clinical events such as stroke and heart attack, and to recommend proper treatment plans to prevent those clinical events from happening. Articles in this special issue came from two recent symposia on “Cardiovascular Diseases and Vulnerable Plaques: Data, Modeling, Predictions and Clinical Applications.” One was held at Worcester Polytechnic Institute (WPI), Worcester, MA, USA, July 13-14, 2014, right after the 7th World Congress of Biomechanics; this symposium was endorsed by the World Council of Biomechanics and partially supported by a grant from the NIH National Institute of Biomedical Imaging and Bioengineering. The other was held at Southeast University (SEU), Nanjing, China, April 18-20, 2014.

Relevance: 20.00%

Abstract:

Objective: To explore the potential for using a basic text search of routine emergency department (ED) data to identify product-related injury in infants, and to compare the patterns found in routine ED data and specialised injury surveillance data.
Methods: Data were sourced from the Emergency Department Information System (EDIS) and the Queensland Injury Surveillance Unit (QISU) for all injured infants between 2009 and 2011. A basic text search was developed to identify the top five infant products in QISU. Sensitivity, specificity and positive predictive value were calculated, and a refined search was then applied to EDIS. Results were manually reviewed to assess validity, and descriptive analysis was conducted to examine patterns between datasets.
Results: The basic text search for all products showed high sensitivity and specificity, and most searches showed high positive predictive value. EDIS patterns were similar to QISU patterns, with strikingly similar month-of-age injury peaks, admission proportions and types of injuries.
Conclusions: This study demonstrated the capacity to identify a sample of valid cases of product-related injury for specified products using simple text searching of routine ED data.
Implications: As the capacity for large datasets grows and the capability to reliably mine text improves, opportunities for expanded sources of injury surveillance data increase. This will ultimately assist stakeholders such as consumer product safety regulators and child safety advocates to appropriately target prevention initiatives.
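A minimal sketch of the workflow described (keyword text search validated against coded surveillance data); the records, keyword pattern and product below are invented for illustration and are not the study's search terms.

```python
import re
import pandas as pd

# Hypothetical ED records: free-text injury description plus a gold-standard
# product code from specialised surveillance data (e.g. QISU) for one product.
records = pd.DataFrame({
    "injury_text": [
        "fell from change table onto floor",
        "rolled off bed",
        "finger caught in pram wheel",
        "fell while being carried in baby sling",
    ],
    "coded_as_pram": [False, False, True, False],
})

# Basic text search: a simple keyword pattern for the product of interest.
pattern = re.compile(r"\b(pram|stroller)\b", re.IGNORECASE)
records["text_hit"] = records["injury_text"].str.contains(pattern)

# Validate the text search against the coded surveillance data.
tp = (records.text_hit & records.coded_as_pram).sum()
fp = (records.text_hit & ~records.coded_as_pram).sum()
fn = (~records.text_hit & records.coded_as_pram).sum()
tn = (~records.text_hit & ~records.coded_as_pram).sum()

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} PPV={ppv:.2f}")
```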

Relevance: 20.00%

Abstract:

Deriving an estimate of optimal fishing effort, or even an approximate estimate, is very valuable for managing fisheries with multiple target species. The most challenging task associated with this is allocating effort to individual species when only the total effort is recorded. Spatial information on the distribution of each species within a fishery can be used to justify the allocations, but often such information is not available. To determine the long-term overall effort required to achieve maximum sustainable yield (MSY) and maximum economic yield (MEY), we consider three methods for allocating effort: (i) optimal allocation, which optimally allocates effort among target species; (ii) fixed proportions, which chooses proportions based on past catch data; and (iii) economic allocation, which splits effort based on the expected catch value of each species. Determining the overall fishing effort required to achieve these management objectives is a maximization problem subject to constraints arising from economic and social considerations. We illustrated the approaches using a case study of the Moreton Bay Prawn Trawl Fishery in Queensland (Australia). The results were consistent across the three methods. Importantly, our analysis demonstrated that the optimal total effort was very sensitive to daily fishing costs: the effort ranged from 9500-11 500 boat-days to 6000-7000, 4000 and 2500 boat-days using daily cost estimates of $0, $500, $750, and $950, respectively. The zero daily cost corresponds to the MSY, while a daily cost of $750 most closely represents the actual present fishing cost. Given the recent debate on which costs should be factored into the analyses for deriving MEY, our findings highlight the importance of including an appropriate cost function for practical management advice. The approaches developed here could be applied to other multispecies fisheries where only aggregated fishing effort data are recorded, as the literature on this type of modelling is sparse.
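A minimal sketch of the "optimal allocation" idea under method (i), assuming long-term equilibrium Schaefer production curves for each species and a shared daily effort cost; all parameter values, prices and costs below are illustrative, not the fishery's estimates.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative equilibrium (Schaefer) parameters for three target species:
# r = intrinsic growth, K = carrying capacity (t), q = catchability, p = price ($/t).
r = np.array([1.2, 0.8, 1.0])
K = np.array([5000.0, 3000.0, 2000.0])
q = np.array([4e-4, 3e-4, 5e-4])
p = np.array([12000.0, 15000.0, 9000.0])
daily_cost = 750.0                         # cost per boat-day

def equilibrium_yield(E):
    """Long-term yield (t) per species when effort E (boat-days) targets each species."""
    return q * E * K * np.maximum(1.0 - q * E / r, 0.0)

def negative_profit(E):
    # MEY objective: total revenue minus total effort cost (negated for minimisation).
    return -(np.sum(p * equilibrium_yield(E)) - daily_cost * np.sum(E))

res = minimize(negative_profit, x0=np.full(3, 500.0),
               bounds=[(0.0, None)] * 3, method="L-BFGS-B")
E_mey = res.x
print("effort per species (boat-days):", np.round(E_mey))
print("total effort:", round(E_mey.sum()), " profit: $", round(-res.fun))
```

Setting `daily_cost = 0.0` recovers the MSY-type solution, which illustrates the paper's point that the optimal total effort is highly sensitive to the assumed daily cost.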

Relevance: 20.00%

Abstract:

A smoothed rank-based procedure is developed for the accelerated failure time model to overcome computational issues. The proposed estimator is based on an EM-type procedure coupled with induced smoothing. The proposed iterative approach converges provided the initial value is based on a consistent estimator, and the limiting covariance matrix can be obtained from a sandwich-type formula. The consistency and asymptotic normality of the proposed estimator are also established. Extensive simulations show that the new estimator is not only computationally less demanding but also more reliable than other existing estimators.
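As background on the induced-smoothing idea (a textbook-style sketch, not the paper's exact equations), a Gehan-type rank estimating function for the AFT model $\log T_i = x_i^{\top}\beta + \varepsilon_i$, with censoring indicator $\delta_i$ and residual $e_i(\beta) = \log T_i - x_i^{\top}\beta$, is

```latex
U_n(\beta) = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}
             \delta_i\,(x_i - x_j)\, I\{e_i(\beta) \le e_j(\beta)\},
```

and induced smoothing replaces the non-smooth indicator with a normal-kernel approximation,

```latex
\tilde{U}_n(\beta) = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}
                     \delta_i\,(x_i - x_j)\,
                     \Phi\!\left(\frac{e_j(\beta) - e_i(\beta)}{r_{ij}}\right),
```

where $\Phi$ is the standard normal distribution function and $r_{ij}$ is a data-driven smoothing scale. The smoothed equation can be solved iteratively and its limiting covariance obtained from a sandwich formula, which is the general setting in which the paper's EM-type procedure operates.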

Relevance: 20.00%

Abstract:

Selection criteria and misspecification tests for the intra-cluster correlation structure (ICS) in longitudinal data analysis are considered. In particular, the asymptotic distribution of the correlation information criterion (CIC) is derived, and a new method for selecting a working ICS is proposed by standardizing the selection criterion as a p-value. The CIC test is found to be powerful in detecting misspecification of the working ICS, while the standardized CIC is also shown to perform satisfactorily for working ICS selection. Simulation studies and applications to two real longitudinal datasets illustrate how these criteria and tests might be useful.
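For context, a standard definition of the correlation information criterion from the GEE literature (stated here as background rather than as this paper's formulation) takes, for a candidate working structure $R$,

```latex
\mathrm{CIC}(R) = \operatorname{tr}\!\left(\hat{\Omega}_I \, \hat{V}_R\right),
```

where $\hat{\Omega}_I$ is the model-based inverse covariance of $\hat{\beta}$ computed under an independence working structure and $\hat{V}_R$ is the robust sandwich covariance estimator under $R$; smaller values indicate a better-fitting working ICS. The paper's contribution, deriving the asymptotic distribution of this criterion and standardizing it to a p-value, is not reproduced here.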

Relevance: 20.00%

Abstract:

We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another over a given time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length, given an individual's previous length measurement and time at liberty, are then derived. We moment-match the true conditional distributions with skew-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag-recapture data and tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
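A minimal sketch of building an STM from a conditional distribution of future length, using scipy's skew-normal and truncating so that no mass is assigned to negative growth; the growth parameters, the way the skew-normal parameters are obtained, and the handling of shrinkage are illustrative assumptions rather than the paper's derivation.

```python
import numpy as np
from scipy.stats import skewnorm

def size_transition_matrix(bin_edges, loc_fn, scale_fn, shape_fn):
    """STM entries: P(length in class j after one time step | length in class i now)."""
    mids = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    n = len(mids)
    stm = np.zeros((n, n))
    for i, L in enumerate(mids):
        # Skew-normal distribution of the future length of an animal of length L.
        dist = skewnorm(shape_fn(L), loc=loc_fn(L), scale=scale_fn(L))
        probs = np.diff(dist.cdf(bin_edges))        # mass in each destination class
        probs[:i] = 0.0                             # disallow negative growth
        stm[i] = probs / probs.sum() if probs.sum() > 0 else np.eye(n)[i]
    return stm

# Illustrative von Bertalanffy-style mean growth: Linf = 190 mm, k = 0.3 per time step.
Linf, k = 190.0, 0.3
edges = np.arange(50.0, 201.0, 10.0)
stm = size_transition_matrix(
    edges,
    loc_fn=lambda L: L + (Linf - L) * (1 - np.exp(-k)),
    scale_fn=lambda L: 4.0,
    shape_fn=lambda L: 2.0,
)
print(stm.shape, np.allclose(stm.sum(axis=1), 1.0))   # each row is a probability vector
```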

Relevance: 20.00%

Abstract:

Contemporary methodology for modelling the growth of organisms is based on continuous trajectories, which hinders the modelling of stepwise growth in crustacean populations. Growth models for fish normally assume a continuous function, but a different type of model is needed for crustaceans, which must moult in order to grow. Crustacean growth is therefore a discontinuous process, driven by the periodic shedding of the exoskeleton at moulting, and this stepwise growth makes estimation more complex. Stochastic approaches can be used to model discontinuous growth, commonly known as "jumps" (Figure 1); however, the stochastic growth model must produce only positive jumps. In view of this, we introduce a subordinator, a special case of a Levy process. A subordinator is a non-decreasing Levy process, which assists in modelling crustacean growth and in understanding the individual variability and stochasticity in moulting periods and increments. We develop methods for parameter estimation and illustrate them with a dataset from laboratory experiments. The motivating dataset is from the ornate rock lobster, Panulirus ornatus, which is found between Australia and Papua New Guinea. Because of sex effects on growth (Munday et al., 2004), we estimate the growth parameters separately for each sex. Because all hard parts are shed repeatedly, exact age determination of a lobster is challenging. However, the growth parameters for the moult process can be estimated from tank data through (i) inter-moult periods and (ii) moult increments. We derive a joint density made up of two functions, one for moult increments and the other for time intervals between moults, which we claim are conditionally independent given pre-moult length and the inter-moult period. Under this conditional independence (a Markov-type assumption), the parameters of each function can be estimated separately. Subsequently, we combine both functions through a Monte Carlo method to obtain a population mean growth curve for crustaceans (e.g. the red curve in Figure 1).
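A minimal Monte Carlo sketch of the stepwise-growth idea, assuming gamma-distributed inter-moult periods and gamma-distributed (hence strictly positive) moult increments as one simple way to obtain the non-decreasing jump behaviour described; the paper's subordinator-based model and its estimation are more involved, and all parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_growth_path(L0, horizon_days, period_shape, period_scale, incr_shape, incr_scale):
    """One stepwise growth path: positive jumps (moult increments) at random moult times."""
    t, L = 0.0, L0
    times, lengths = [0.0], [L0]
    while True:
        t += rng.gamma(period_shape, period_scale)      # inter-moult period (days)
        if t > horizon_days:
            break
        L += rng.gamma(incr_shape, incr_scale)           # moult increment (mm), always > 0
        times.append(t)
        lengths.append(L)
    return np.array(times), np.array(lengths)

def population_mean_length(n_paths, grid, **kw):
    """Monte Carlo mean length over time across simulated individuals."""
    means = np.zeros_like(grid, dtype=float)
    for _ in range(n_paths):
        times, lengths = simulate_growth_path(**kw)
        idx = np.searchsorted(times, grid, side="right") - 1   # piecewise-constant path
        means += lengths[idx]
    return means / n_paths

grid = np.linspace(0, 730, 50)                                 # two years, in days
mean_curve = population_mean_length(
    200, grid, L0=60.0, horizon_days=730,
    period_shape=4.0, period_scale=15.0, incr_shape=6.0, incr_scale=1.0)
print(mean_curve[:5].round(1), "...", mean_curve[-1].round(1))
```

Averaging many simulated stepwise paths in this way yields the smooth population mean growth curve referred to in the abstract.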