534 results for data centric storage
Abstract:
Key decisions at the collection, pre-processing, transformation, mining and interpretation phases of any knowledge discovery from databases (KDD) process depend heavily on assumptions and theoretical perspectives relating to the type of task to be performed and the characteristics of the data sourced. In this article, we compare and contrast the theoretical perspectives and assumptions taken in data mining exercises in the legal domain with those adopted in data mining in Traditional Chinese Medicine (TCM) and allopathic medicine. The juxtaposition yields insights for the application of KDD to Traditional Chinese Medicine.
Abstract:
Citizen Science projects are initiatives in which members of the general public participate in scientific research projects and perform or manage research-related tasks such as data collection and/or data annotation. Citizen Science is technologically possible and scientifically significant. However, as the gathered information comes from the crowd, data quality is always hard to manage. There are many ways to manage data quality, and reputation management is one of the common approaches. In recent years, many research teams have deployed audio or image sensors in natural environments in order to monitor the status of animals or plants, with the collected data analysed by ecologists. However, as the volume of collected data is extremely large and the number of ecologists is very limited, it is impossible for scientists to manually analyse all these data. The functions of existing automated tools to process the data are still very limited and the results are still not very accurate. Therefore, researchers have turned to recruiting general citizens who are interested in helping scientific research to do pre-processing tasks such as species tagging. Although research teams can save time and money by recruiting general citizens to volunteer their time and skills to help with data analysis, the reliability of contributed data varies considerably. This research therefore aims to investigate techniques to enhance the reliability of data contributed by general citizens in scientific research projects, especially acoustic sensing projects. In particular, we aim to investigate how reputation management can be used to enhance data reliability. Reputation systems have been used to reduce uncertainty and improve data quality in many marketing and e-commerce domains, and commercial organizations that have chosen to embrace reputation management and implement the technology have gained many benefits. Data quality issues are significant for Citizen Science because of the quantity and diversity of people and devices involved; however, research on reputation management in this area is relatively new. We therefore start our investigation by examining existing reputation systems in different domains, and then design novel reputation management approaches for Citizen Science projects to categorise participants and data. We have investigated critical elements which may influence data reliability in Citizen Science projects, including personal information such as location and education, and performance information such as the ability to recognise certain bird calls. The designed reputation framework is evaluated by a series of experiments involving many participants collecting and interpreting data, in particular environmental acoustic data. Our research into the advantages of reputation management in Citizen Science (or crowdsourcing in general) will help increase awareness among organizations that are unacquainted with its potential benefits.
Abstract:
Health complaint statistics are important for identifying problems and bringing about improvements to the health care provided by health service providers and to the wider health care system. This paper provides an overview of complaint handling by the eight Australian state and territory health complaint entities, based on an analysis of data from their annual reports. The analysis shows considerable variation between jurisdictions in the ways complaint data are defined, collected and recorded. Complaints from the public are an important accountability mechanism and open a window on service quality. The lack of a national approach leads to fragmentation of complaint data and a lost opportunity to use national data to assist policy development and identify the main areas causing consumers to complain. We need a national approach to complaint data collection in order to respond better to patients' concerns.
Abstract:
Background: HIV-1 Pr55gag virus-like particles (VLPs) expressed by baculovirus in insect cells are considered a very promising HIV-1 vaccine candidate, as they have been shown to elicit broad cellular immune responses when tested in animals, particularly when used as a boost to DNA or BCG vaccines. However, it is important for the VLPs to retain their structure for them to be fully functional and effective. The medium in which the VLPs are formulated and the temperature at which they are stored are two important factors affecting their stability. Findings: We describe the screening of three different readily available formulation media (sorbitol, sucrose and trehalose) for their ability to stabilise HIV-1 Pr55gag VLPs during prolonged storage. Transmission electron microscopy (TEM) was performed on VLPs stored at two different concentrations of each medium at three different temperatures (4°C, −20°C and −70°C) over different time periods, and the appearance of the VLPs was compared. VLPs stored in 15% trehalose at −70°C retained their original appearance most effectively over a period of 12 months. VLPs stored in 5% trehalose, sorbitol or sucrose were not all intact even after 1 month of storage at the temperatures tested. In addition, we showed that VLPs stored under these conditions could be frozen and re-thawed twice before showing changes in their appearance. Conclusions: Although the inclusion of other analytical tools is essential to validate these preliminary findings, storage in 15% trehalose at −70°C for 12 months was the most effective in retaining VLP stability.
Abstract:
Background: National physical activity data suggest that there is a considerable difference in the physical activity (PA) levels of US and Australian adults. Although different surveys (Active Australia [AA] and BRFSS) are used, the questions are similar; different protocols, however, are used to estimate "activity" from the data collected. The primary aim of this study was to assess whether the two approaches to the management of PA data could explain some of the difference in prevalence estimates derived from the two national surveys. Methods: Secondary data analysis of the most recent AA survey (N = 2987). Results: 15% of the sample was defined as "active" using the Australian criteria but as "inactive" using the BRFSS protocol, even though weekly energy expenditure was commensurate with meeting current guidelines. Younger respondents (age < 45 y) were more likely to be "misclassified" using the BRFSS criteria. Conclusions: The prevalence of activity in Australia and the US appears to be more similar than previously thought.
Abstract:
Client owners usually need an estimate or forecast of their likely building costs in advance of detailed design in order to confirm the financial feasibility of their projects. Because of their timing in the project life cycle, these early-stage forecasts are characterized by the minimal amount of information available concerning the new (target) project, to the point that often only its size and type are known. One approach is to use the mean contract sum of a sample, or base group, of previous projects of a similar type and size to the project for which the estimate is needed. Bernoulli's law of large numbers implies that this base group should be as large as possible. However, increasing the size of the base group inevitably involves including projects that are less and less similar to the target project. Deciding on the optimal number of base group projects is known as the homogeneity or pooling problem. A method of solving the homogeneity problem is described involving the use of closed-form equations to compare three different sampling arrangements of previous projects for their simulated forecasting ability by a cross-validation method, in which a series of targets is extracted, with replacement, from the groups and compared with the mean value of the projects in the base groups. The procedure is then demonstrated with 450 Hong Kong projects (of different project types: Residential, Commercial centre, Car parking, Social community centre, School, Office, Hotel, Industrial, University and Hospital) clustered into base groups according to their type and size.
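As a rough illustration only (hypothetical data and function names, not the authors' closed-form equations), the cross-validation idea described above can be sketched in a few lines of Python: a candidate base group is scored by repeatedly drawing a target project, forecasting it with the mean of the remaining projects, and averaging the forecasting error.

    import numpy as np

    def simulated_forecast_error(costs, n_trials=200, seed=0):
        """Score a candidate base group of contract sums (hypothetical sketch).

        Repeatedly draws a 'target' project with replacement, forecasts it
        with the mean contract sum of the remaining projects, and returns
        the mean absolute percentage error of those forecasts.
        """
        rng = np.random.default_rng(seed)
        costs = np.asarray(costs, dtype=float)
        errors = []
        for _ in range(n_trials):
            i = rng.integers(len(costs))      # pick a target project
            base = np.delete(costs, i)        # remaining base group
            forecast = base.mean()            # mean contract sum forecast
            errors.append(abs(forecast - costs[i]) / costs[i])
        return float(np.mean(errors))

Comparing this score across base groups of increasing size (and decreasing similarity to the target) is one way to frame the homogeneity/pooling trade-off the paper addresses analytically.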
Abstract:
The selection of appropriate analogue materials is a central consideration in the design of realistic physical models. We investigate the rheology of highly-filled silicone polymers in order to find materials with a power-law strain-rate softening rheology suitable for modelling rock deformation by dislocation creep, and report the rheological properties of the materials as functions of the filler content. The mixtures exhibit strain-rate softening behaviour but become strain-dependent with increasing amounts of filler. For the strain-independent viscous materials, flow laws are presented, while for strain-dependent materials the relative importance of strain and strain-rate softening/hardening is reported. If the stress or strain rate is above a threshold value, some highly-filled silicone polymers may be considered linear visco-elastic (strain independent) and power-law strain-rate softening. The power-law exponent can be raised from 1 to ~3 by using mixtures of high-viscosity silicone and plasticine. However, the need for high shear strain rates to obtain the power-law rheology imposes some restrictions on the use of such materials for geodynamic modelling. Two simple shear experiments are presented that use Newtonian and power-law strain-rate softening materials. The results demonstrate how materials with power-law rheology result in better strain localization in analogue experiments.
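For reference, the dislocation-creep style power-law flow law referred to above can be written in its standard form (the fitted parameters for the silicone-plasticine mixtures are reported in the paper, not here):

    \dot{\gamma} = A \sigma^{n}, \qquad \eta_{eff} = \sigma / \dot{\gamma} = A^{-1/n} \, \dot{\gamma}^{(1/n) - 1}

With n ≈ 3 the effective viscosity decreases roughly as \dot{\gamma}^{-2/3}, i.e. the material is strain-rate softening, while n = 1 recovers Newtonian behaviour.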
Abstract:
A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment due to the use of non-deterministic, readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the units from performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also timestamp events that occur on General Purpose Input/Output (GPIO) pins, referencing a submillisecond-accurate wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised distributed image acquisition system over a large geographic area; a real-world application of this functionality is a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A 'best-practice' design methodology is adopted within the project to ensure the final system operates according to its requirements. Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is then verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse system comprises a single Grand Master unit, which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used for wirelessly synchronising the clocks between the boards makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results.
Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement; the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of the Slaves relative to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond. The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below the target of 1 ms.
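The clock-offset calculation in the IEEE-1588 Precision Time Protocol, on which the BabelFuse synchronisation mechanism is partly based, can be sketched as follows (a generic textbook illustration in Python, not the BabelFuse firmware itself):

    def ptp_offset_and_delay(t1, t2, t3, t4):
        """Standard PTP offset/delay estimate, assuming a symmetric path.

        t1: master sends Sync          (master clock)
        t2: slave receives Sync        (slave clock)
        t3: slave sends Delay_Req      (slave clock)
        t4: master receives Delay_Req  (master clock)
        """
        offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
        delay = ((t2 - t1) + (t4 - t3)) / 2.0   # estimated one-way path delay
        return offset, delay

Repeating such an exchange multiple times per second and filtering the resulting offsets is the kind of mechanism that lets the Slave clocks track the Grand Master with the sub-microsecond residual offsets reported above.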
Abstract:
In most visual mapping applications suited to Autonomous Underwater Vehicles (AUVs), stereo visual odometry (VO) is rarely utilised as a pose estimator, as imagery is typically captured at a very low frame rate due to energy conservation and data storage requirements. This adversely affects the robustness of a vision-based pose estimator and its ability to generate a smooth trajectory. This paper presents a novel VO pipeline for low-overlap imagery from an AUV that utilises constrained motion and integrates magnetometer data in a bi-objective bundle adjustment stage to achieve low-drift pose estimates over large trajectories. We analyse the performance of a standard stereo VO algorithm and compare the results to the modified VO algorithm. Results are demonstrated in a virtual environment in addition to low-overlap imagery gathered from an AUV. The modified VO algorithm shows significantly improved pose accuracy and performance over trajectories of more than 300 m. In addition, dense 3D meshes generated from the visual odometry pipeline are presented as a qualitative output of the solution.
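The abstract does not give the exact cost function; a plausible bi-objective form, shown purely as an illustration, combines the usual stereo reprojection error with a magnetometer heading term:

    E(P, X) = \sum_{i,j} \rho( \| \pi(P_i, X_j) - x_{ij} \|^2 ) + \lambda \sum_i \| \psi(P_i) - \psi_i^{mag} \|^2

where P_i are camera poses, X_j landmark positions, x_{ij} image observations, \rho a robust loss, \psi(P_i) the heading implied by pose i, \psi_i^{mag} the magnetometer heading, and \lambda a weight balancing the two objectives.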
Abstract:
Fruit drying is a process of removing moisture to preserve fruit by preventing microbial spoilage. It increases shelf life, reduces weight and volume (thus minimising packing, storage and transportation costs), and enables storage of food under ambient conditions. However, it is a complex process involving coupled heat and mass transfer, changes in physical properties, and shrinkage of the material. Against this background, the aim of this paper is to develop a mathematical model to simulate coupled heat and mass transfer during convective drying of fruit. The model can be used to predict the temperature and moisture distribution inside the fruit during drying. Two models were developed, considering shrinkage-dependent and temperature-dependent moisture diffusivity respectively, and the results were compared. The governing equations of heat and mass transfer were solved and a parametric study was performed with Comsol Multiphysics 4.3. The predicted results were validated against experimental data.
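In their generic form (simplified here; the shrinkage terms, source terms and boundary conditions used in the paper are not reproduced), the coupled governing equations are a heat conduction equation and a moisture diffusion equation:

    \rho c_p \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T)
    \frac{\partial M}{\partial t} = \nabla \cdot (D_{eff} \nabla M)

where T is temperature, M the moisture content, and D_{eff} the effective moisture diffusivity, which in the two models is taken as a function of shrinkage or of temperature respectively; evaporative cooling and convective heat and mass transfer at the fruit surface couple the two equations.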
Abstract:
Mandatory data breach notification laws are a novel statutory solution in relation to organizational protections of personal information. They require organizations which have suffered a breach of security involving personal information to notify those persons whose information may have been affected. These laws originated in the state-based legislatures of the United States during the last decade and have subsequently garnered worldwide legislative interest. Despite their perceived utility, mandatory data breach notification laws have several conceptual and practical concerns that limit the scope of their applicability, particularly in relation to existing information privacy law regimes. We outline these concerns and, in doing so, contend that while mandatory data breach notification laws have many useful facets, their utility as an 'add-on' to remedy the failings of current information privacy law frameworks should not necessarily be taken for granted.
Abstract:
This article reports on a longitudinal study of data modelling across grades 1–3. The activity engaged children in designing, implementing, and analysing a survey about their new playground. Data modelling involves investigations of meaningful phenomena, deciding what is worthy of attention (identifying complex attributes), and then progressing to organising, structuring, visualising, and representing data. The core components of data modelling addressed here are children's structuring and representing of data, with a focus on their display of metarepresentational competence (diSessa, 2004). Such competence includes students' abilities to invent or design a variety of new representations, explain their creations, understand the role they play, and critique and compare the adequacy of representations. Reported here are the ways in which the children structured and represented their data, the metarepresentational competence displayed, and the links between their metarepresentational competence and conceptual competence.
Abstract:
A mineralogical survey of chondritic interplanetary dust particles (IDPs) showed that these micrometeorites differ significantly in form and texture from components of carbonaceous chondrites and contain some mineral assemblages which do not occur in any meteorite class [1]. Models of chondritic IDP mineral evolution generally ignore the typical (ultra-)fine grain size of constituent minerals, which range between 0.002 and 0.1 µm in size [2]. The chondritic porous (CP) subset of chondritic IDPs is probably debris from short-period comets, although evidence for a cometary origin is still circumstantial [3]. If CP IDPs represent dust from regions of the Solar System in which comet accretion occurred, it can be argued that pervasive mineralogical evolution of IDP dust has been arrested due to cryogenic storage in comet nuclei. Thus, the preservation in CP IDPs of "unusual meteorite minerals", such as oxides of tin, bismuth and titanium [4], should not be dismissed casually. These minerals may contain specific information about processes that occurred in the regions of the solar nebula, and early Solar System, which spawned the IDP parent bodies such as comets and C, P and D asteroids [6]. It is not fully appreciated that the apparent disparity between the mineralogy of CP IDPs and carbonaceous chondrite matrix may also be caused by the choice of electron-beam techniques with different analytical resolution. For example, Mg-Si-Fe distributions of CI matrix obtained by "defocussed beam" microprobe analyses are displaced towards lower Fe values when using analytical electron microscope (AEM) data which resolve individual mineral grains of various layer silicates and magnetite in the same matrix [6,7]. In general, "unusual meteorite minerals" in chondritic IDPs, such as metallic titanium, TinO2n-1 (Magnéli phases) and anatase [8], add to the mineral database of fine-grained Solar System materials and provide constraints on processes that occurred in the early Solar System.
Abstract:
Background: Cancer outlier profile analysis (COPA) has proven to be an effective approach to analyzing cancer expression data, leading to the discovery of the TMPRSS2 and ETS family gene fusion events in prostate cancer. However, the original COPA algorithm did not identify down-regulated outliers, and the currently available R package implementing the method is similarly restricted to the analysis of over-expressed outliers. Here we present a modified outlier detection method, mCOPA, which contains refinements to the outlier-detection algorithm, identifies both over- and under-expressed outliers, is freely available, and can be applied to any expression dataset. Results: We compare our method to other feature-selection approaches, and demonstrate that mCOPA frequently selects more informative features than differential expression or variance-based feature selection approaches, and is able to recover observed clinical subtypes more consistently. We demonstrate the application of mCOPA to prostate cancer expression data, and explore the use of outliers in clustering, pathway analysis, and the identification of tumour suppressors. We analyse the under-expressed outliers to identify known and novel prostate cancer tumour suppressor genes, validating these against data in Oncomine and the Cancer Gene Index. We also demonstrate how a combination of outlier analysis and pathway analysis can identify molecular mechanisms disrupted in individual tumours. Conclusions: We demonstrate that mCOPA offers advantages, compared to differential expression or variance, in selecting outlier features, and that the features so selected are better able to assign samples to clinically annotated subtypes. Further, we show that the biology explored by outlier analysis differs from that uncovered in differential expression or variance analysis. mCOPA is an important new tool for the exploration of cancer datasets and the discovery of new cancer subtypes, and can be combined with pathway and functional analysis approaches to discover mechanisms underpinning heterogeneity in cancers.
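As a minimal sketch of the original COPA transformation that mCOPA refines (illustrative Python, not the published mCOPA implementation), each gene is median-centred, scaled by its median absolute deviation, and scored by an upper or lower percentile of the transformed values:

    import numpy as np

    def copa_scores(expr, pct=90):
        """Per-gene outlier scores for a genes x samples expression matrix.

        Over-expressed outliers score highly at the upper percentile,
        under-expressed outliers at the lower percentile.
        """
        med = np.median(expr, axis=1, keepdims=True)
        mad = np.median(np.abs(expr - med), axis=1, keepdims=True)
        mad[mad == 0] = 1e-9                         # guard against zero MAD
        z = (expr - med) / mad                       # robust standardisation
        up = np.percentile(z, pct, axis=1)           # over-expression score
        down = np.percentile(z, 100 - pct, axis=1)   # under-expression score
        return up, down

Ranking genes by these scores, rather than by mean differences or variance, is what lets outlier analysis pick up genes dysregulated in only a subset of tumours.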