875 results for Index reduction techniques
Abstract:
In the medical and healthcare arena, patients' data is not just their own personal history but also a valuable large dataset for finding solutions to diseases. While electronic medical records are becoming popular and are used in healthcare workplaces such as hospitals and insurance companies, and by major stakeholders such as physicians and their patients, access to such information must be handled in a way that preserves privacy and security. Finding the best way to keep the data secure has therefore become an important issue in the area of database security. Sensitive medical data should be encrypted in databases, and many encryption/decryption techniques and algorithms exist for preserving privacy and security. Their performance is an important factor when medical data is managed in databases. Another important factor is that stakeholders need cost-effective ways to reduce the total cost of ownership. As an alternative, DAS (Data as Service) is a popular outsourcing model that satisfies this cost-effectiveness, but it requires that the encryption/decryption modules be handled by trustworthy stakeholders. This research project focuses on query response times in a DAS model (AES-DAS) and compares the outsourcing model with an in-house model that incorporates Microsoft's built-in encryption scheme in SQL Server. The project includes building a prototype of medical database schemas, with two stages of simulations. The first stage uses six databases to measure the performance of plain text, Microsoft built-in encryption and AES-DAS. In particular, AES-DAS incorporates an implementation of symmetric-key encryption with AES (Advanced Encryption Standard) and a bucket indexing processor using a Bloom filter. The results are categorised into character-type queries, numeric-type queries, range queries, range queries using the bucket index, and aggregate queries. The second stage is a scalability test from 5K to 2560K records. The main result of these simulations is that, as an outsourcing model, AES-DAS using the bucket index is around 3.32 times faster than plain AES-DAS with 70 partitions and 10K-record databases. Retrieving numeric-type data takes less time than character-type data in AES-DAS, and the aggregate query response time in AES-DAS is not as consistent as that of the MS built-in encryption scheme. The scalability test shows that once the DBMS reaches a certain threshold, query response times degrade rapidly. Further investigation is needed to produce other outcomes from these simulations and to construct a secure EMR (Electronic Medical Record) more efficiently.
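The bucket-index idea above can be sketched in a few lines: the server keeps ciphertexts tagged with coarse bucket labels and a per-bucket Bloom filter, so most buckets can be ruled out before any decryption. The schema, bucket width and filter parameters below are illustrative assumptions, and Fernet (AES-based) stands in for the project's AES module.

```python
# A hedged sketch of bucket indexing over encrypted values: ciphertexts are
# tagged with a coarse bucket label, and a Bloom filter per bucket lets
# non-matching buckets be skipped without decryption. All parameters are
# illustrative, not the project's actual configuration.
import hashlib
from cryptography.fernet import Fernet

class BloomFilter:
    def __init__(self, size=1024, n_hashes=3):
        self.size, self.n_hashes, self.bits = size, n_hashes, 0

    def _positions(self, item):
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):  # false positives possible, no false negatives
        return all(self.bits & (1 << pos) for pos in self._positions(item))

cipher = Fernet(Fernet.generate_key())
BUCKET_WIDTH = 10
rows, filters = [], {}

for age in (23, 27, 41, 58):                     # hypothetical numeric column
    bucket = age // BUCKET_WIDTH                 # coarse, non-revealing label
    rows.append((bucket, cipher.encrypt(str(age).encode())))
    filters.setdefault(bucket, BloomFilter()).add(age)

# Equality query for age == 27: consult the Bloom filter before fetching and
# decrypting, so most buckets are ruled out server-side.
target = 27
bucket = target // BUCKET_WIDTH
if bucket in filters and filters[bucket].might_contain(target):
    hits = [int(cipher.decrypt(ct)) for b, ct in rows if b == bucket]
    print([v for v in hits if v == target])      # -> [27]
```

Range queries work the same way, except that every bucket overlapping the queried interval is fetched and the final filtering happens client-side after decryption.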
Abstract:
The main factors affecting environmental sensitivity to degradation are soil, vegetation, climate and management, through either their intrinsic characteristics or their interaction on the landscape. Different levels of degradation risk may be observed in response to particular combinations of these factors. For instance, the combination of inappropriate management practices and intrinsically weak soil conditions will result in severe degradation of the environment, while the combination of the same type of management with better soil conditions may lead to negligible degradation. The aim of this study was to identify factors and their impact on land degradation processes in three areas of the Basilicata region (southern Italy) using a procedure that couples environmental indices, GIS and crop-soil simulation models. Areas prone to desertification were first identified using the Environmentally Sensitive Areas (ESA) procedure. An analysis identifying the weight that each contributing factor (climate, soil, vegetation, management) had on the ESA was carried out using GIS techniques. The SALUS model was then applied to identify the management practices that could lead to better soil conditions and enhance land use sustainability. The best management practices were found to be those that minimized soil disturbance and increased soil organic carbon. Two alternative scenarios with improved soil quality, and consequently improved soil water holding capacity, were used as mitigation measures. The ESA were recalculated and the effects of the mitigation measures suggested by the model were assessed. The new ESA showed a significant reduction in land degradation.
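In the ESA procedure, the sensitivity index of each grid cell is conventionally the geometric mean of four quality indices (soil, climate, vegetation, management). A minimal sketch with hypothetical raster values:

```python
# A minimal sketch of the ESA sensitivity index as defined in MEDALUS-type
# procedures: the geometric mean of four quality indices, each conventionally
# scaled between 1.0 and 2.0. The raster values below are hypothetical.
import numpy as np

def esa_index(sqi, cqi, vqi, mqi):
    """Environmentally Sensitive Area index per grid cell."""
    return (sqi * cqi * vqi * mqi) ** 0.25

# Hypothetical 2x2 rasters for a small test area.
sqi = np.array([[1.2, 1.6], [1.8, 1.1]])  # soil quality index
cqi = np.array([[1.4, 1.4], [1.5, 1.2]])  # climate quality index
vqi = np.array([[1.3, 1.7], [1.9, 1.0]])  # vegetation quality index
mqi = np.array([[1.1, 1.8], [2.0, 1.0]])  # management quality index

esai = esa_index(sqi, cqi, vqi, mqi)
print(np.round(esai, 2))
# Higher values indicate cells more sensitive to degradation; rerunning with
# the improved-soil scenario rasters (higher soil quality) lowers the index,
# which is the effect the mitigation measures aim for.
```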
Abstract:
The research team recognized the value of network-level Falling Weight Deflectometer (FWD) testing for evaluating the structural condition trends of flexible pavements. However, practical limitations due to the cost of testing, traffic control and safety concerns, and the difficulty of testing a large network may discourage some agencies from conducting network-level FWD testing. For this reason, the surrogate measure of the Structural Condition Index (SCI) is suggested for use. The main purpose of the research presented in this paper is to investigate data mining strategies and to develop a method of predicting structural condition trends for network-level applications that does not require FWD testing. The research team first evaluated the existing and historical pavement condition, distress, ride, traffic and other data attributes in the Texas Department of Transportation (TxDOT) Pavement Maintenance Information System (PMIS), applied data mining strategies to the data, discovered useful patterns and knowledge for SCI value prediction, and finally provided a reasonable measure of pavement structural condition that is correlated with the SCI. To evaluate the performance of the developed prediction approach, a case study was conducted using SCI data calculated from FWD data collected on flexible pavements over a 5-year period (2005-09) from 354 PMIS sections representing 37 pavement sections on the Texas highway system. The preliminary results showed that the proposed approach can be used as a supportive pavement structural index when FWD deflection data are not available, helping pavement managers identify the timing and appropriate treatment level of preventive maintenance activities.
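The kind of data-mining step described can be sketched as a regression from routinely collected PMIS-style attributes to SCI. The attribute names, the synthetic data and the model choice (a random forest) are illustrative assumptions, not TxDOT's actual method.

```python
# A hedged sketch: predict SCI from condition attributes that are collected
# without FWD testing. Data are synthetic and for demonstration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical PMIS-style attributes per section:
X = np.column_stack([
    rng.uniform(50, 100, n),   # distress score
    rng.uniform(2.5, 5.0, n),  # ride score
    rng.uniform(1e4, 1e6, n),  # cumulative traffic
    rng.uniform(0, 20, n),     # pavement age (years)
])
# Synthetic SCI with noise, standing in for FWD-derived values.
y = (0.01 * X[:, 0] + 0.1 * X[:, 1] - 1e-7 * X[:, 2] - 0.01 * X[:, 3]
     + rng.normal(0, 0.05, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out sections: {model.score(X_te, y_te):.2f}")
```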
Abstract:
This paper presents two novel concepts to enhance the accuracy of damage detection using the Modal Strain Energy based Damage Index (MSEDI) in the presence of noise in the mode shape data. Firstly, the paper presents a sequential curve fitting technique that reduces the effect of noise on the calculation of the MSEDI more effectively than the two commonly used curve fitting techniques, namely polynomial and Fourier series fitting. Secondly, a probability-based Generalized Damage Localization Index (GDLI) is proposed as a viable improvement to the damage detection process. The study uses a validated ABAQUS finite-element model of a reinforced concrete beam to obtain mode shape data in the undamaged and damaged states. Noise is simulated by adding three levels of random noise (1%, 3%, and 5%) to the mode shape data. Results show that damage detection is enhanced as the number of modes and samples used with the GDLI increases.
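The role of curve fitting can be illustrated with an ordinary polynomial fit (not the authors' sequential scheme): smoothing the noisy mode shape before differentiating it twice sharply reduces the noise amplification in the curvature, which drives the strain-energy terms.

```python
# A minimal sketch of the noise-reduction idea: fit a smooth curve to a noisy
# mode shape before taking the second derivative needed for modal strain
# energy. Uses a plain polynomial fit, not the authors' sequential technique.
import numpy as np

x = np.linspace(0, 1, 101)                 # normalized beam coordinate
mode = np.sin(np.pi * x)                   # first bending mode (analytic)
noisy = mode + 0.03 * np.random.default_rng(1).normal(size=x.size)  # ~3% noise

coeffs = np.polyfit(x, noisy, deg=7)       # smooth approximation
fitted = np.polyval(coeffs, x)

# Differentiating the raw noisy shape amplifies the noise; differentiating
# the fitted shape does not.
curv_noisy = np.gradient(np.gradient(noisy, x), x)
curv_fit = np.gradient(np.gradient(fitted, x), x)
curv_true = -(np.pi ** 2) * np.sin(np.pi * x)

print("RMS curvature error, raw:", np.sqrt(np.mean((curv_noisy - curv_true) ** 2)))
print("RMS curvature error, fit:", np.sqrt(np.mean((curv_fit - curv_true) ** 2)))
```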
Abstract:
Prostate cancer (CaP) is the second leading cause of cancer-related deaths in North American males and the most common newly diagnosed cancer in men worldwide. Biomarkers are widely used for both early detection and prognostic tests for cancer. The current, commonly used biomarker for CaP is serum prostate specific antigen (PSA). However, the specificity of this biomarker is low, as its serum level is increased not only in CaP but also in various other diseases, as well as with age and even body mass index. Human body fluids provide an excellent resource for the discovery of biomarkers, with the advantage over tissue/biopsy samples of their ease of access, due to the less invasive nature of collection. However, their analysis presents challenges in terms of variability and validation. Blood and urine are two human body fluids commonly used for CaP research, but their proteomic analyses are limited both by the large dynamic range of protein abundance, which makes detection of low-abundance proteins difficult, and, in the case of urine, by the high salt concentration. To overcome these challenges, different techniques for removal of high-abundance proteins and enrichment of low-abundance proteins are used. Their applications and limitations are discussed in this review. A number of innovative proteomic techniques have improved detection of biomarkers. They include two-dimensional differential gel electrophoresis (2D-DIGE), quantitative mass spectrometry (MS) and functional proteomic studies, i.e., investigating the association of post-translational modifications (PTMs) such as phosphorylation, glycosylation and protein degradation. The recent development of quantitative MS techniques such as stable isotope labeling with amino acids in cell culture (SILAC), isobaric tags for relative and absolute quantitation (iTRAQ) and multiple reaction monitoring (MRM) has allowed proteomic researchers to quantitatively compare data from different samples. 2D-DIGE has greatly improved the statistical power of classical 2D gel analysis by introducing an internal control. This chapter aims to review novel CaP biomarkers and to discuss current trends in biomarker research from two angles: the source of biomarkers (particularly human body fluids such as blood and urine), and emerging proteomic approaches for biomarker research.
Abstract:
Detailed representations of complex flow datasets are often difficult to generate using traditional vector visualisation techniques such as arrow plots and streamlines. This is particularly true when the flow regime changes in time. Texture-based techniques, which rely on the advection of dense textures, are novel approaches to visualising such flows. We review two popular texture-based techniques and their application to flow datasets sourced from active research projects: line integral convolution (LIC) [1] and image-based flow visualisation (IBFV) [18]. We evaluated both and report on their effectiveness from a visualisation perspective, as well as on their ease of implementation and computational overheads.
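As a sketch of how LIC works: each output pixel averages a white-noise texture along a short streamline traced through the vector field, so pixels on the same streamline acquire correlated intensities. The field, grid size and kernel length below are illustrative.

```python
# A compact sketch of line integral convolution (LIC). Each pixel's value is
# the average of a noise texture sampled along a streamline traced forward
# and backward from that pixel with simple Euler steps.
import numpy as np

n, L, h = 64, 10, 0.5              # grid size, half-kernel steps, step size
noise = np.random.default_rng(0).random((n, n))

# Example vector field: solid-body rotation about the grid centre.
yy, xx = np.mgrid[0:n, 0:n].astype(float)
u, v = -(yy - n / 2), (xx - n / 2)
mag = np.hypot(u, v) + 1e-9
u, v = u / mag, v / mag            # normalize to unit speed

def sample(x, y):
    i, j = int(round(y)) % n, int(round(x)) % n
    return noise[i, j], u[i, j], v[i, j]

lic = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        total, count = 0.0, 0
        for direction in (1.0, -1.0):   # trace both ways along the streamline
            x, y = float(j), float(i)
            for _ in range(L):
                val, ux, vy = sample(x, y)
                total += val
                count += 1
                x += direction * h * ux  # Euler advection step
                y += direction * h * vy
        lic[i, j] = total / count

print(lic.shape, round(lic.min(), 2), round(lic.max(), 2))
```

IBFV achieves a similar dense-texture effect by repeatedly warping and blending the image with the flow each frame, which maps naturally onto graphics hardware and handles time-varying fields.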
Abstract:
The structure of Cu-ZSM-5 catalysts that show activity for direct NO decomposition and selective catalytic reduction of NOₓ by hydrocarbons has been investigated by a multitude of modern surface analysis and spectroscopy techniques, including X-ray photoelectron spectroscopy, thermogravimetric analysis, and in situ Fourier transform infrared spectroscopy. A series of four catalysts was prepared by exchange of Na-ZSM-5 with dilute copper acetate, and the copper loading was controlled by variation of the solution pH. Underexchanged catalysts contained isolated Cu²⁺OH⁻(H₂O) species, and as the copper loading was increased, Cu²⁺ ions incorporated into the zeolite lattice appeared. The sites at which these two copper species were located were fundamentally different. The Cu²⁺OH⁻(H₂O) moieties were bound to two lattice oxygen ions and associated with one aluminum framework species. In contrast, the Cu²⁺ ions were probably bound to four lattice oxygen ions and associated with two framework aluminum ions. Once the Cu-ZSM-5 samples attained high levels of exchange, the development of [Cu(μ-OH)₂Cu]ₙ²⁺OH⁻(H₂O) species along with a small concentration of Cu(OH)₂ was observed. On activation in helium to 500°C, the Cu²⁺OH⁻(H₂O) species transformed into Cu²⁺O⁻ and Cu⁺ moieties, whereas the Cu²⁺ ions were apparently unaffected by this treatment (apart from the loss of ligated water molecules). Calcination of the precursors resulted in the formation of Cu²⁺O²⁻ and a one-dimensional CuO species. Temperature-programmed desorption studies revealed that oxygen was removed from the latter two species at 407 and 575°C, respectively.
Abstract:
CTAC2012 was the 16th biennial Computational Techniques and Applications Conference, held at Queensland University of Technology from 23 to 26 September 2012. The ANZIAM Special Interest Group in Computational Techniques and Applications is responsible for the CTAC meetings, the first of which was held in 1981.
Abstract:
Transient hyperopic refractive shifts occur on a timescale of weeks in some patients after the initiation of therapy for hyperglycemia, and are usually followed by recovery to the original refraction. A possible lenticular origin of these changes is considered in terms of a paraxial gradient index model. Assuming that the lens thickness and curvatures remain unchanged, as observed in practice, it appears possible to account for initial hyperopic shifts of up to a few diopters by a reduction in refractive index near the lens center and an alteration in the rate of change between center and surface, such that most of the index change occurs closer to the lens surface. Restoration of the original refraction depends on further change in the refractive index distribution, with more gradual variation in refractive index from the lens center to its surface. Modeling limitations are discussed.
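One common way to parameterize such a profile (an illustrative assumption, not necessarily the authors' exact model) is a power law between the central and surface indices:

```latex
% Illustrative power-law gradient index profile. n_c: central index,
% n_s: surface index, \rho: normalized distance from the lens centre,
% p: exponent controlling where the index change is concentrated.
\[
  n(\rho) = n_c + (n_s - n_c)\,\rho^{\,p}, \qquad 0 \le \rho \le 1 .
\]
% Lowering n_c while increasing p reduces the index near the centre and
% pushes most of the index change toward the surface, giving a hyperopic
% shift with unchanged thickness and curvatures; a later decrease in p
% (more gradual centre-to-surface variation) models the recovery.
```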
Abstract:
Genomic DNA obtained from patient whole blood samples is a key element for genomic research. The advantages and disadvantages, in terms of time-efficiency, cost-effectiveness and laboratory requirements, of the procedures available to isolate nucleic acids need to be considered before choosing any particular method. These characteristics have not been fully evaluated for some laboratory techniques, such as the salting out method for DNA extraction, which has been excluded from comparison in the studies published to date. We compared three different protocols (a traditional salting out method, a modified salting out method and a commercially available kit) to determine the most cost-effective and time-efficient method of extracting DNA. We extracted genomic DNA from whole blood samples obtained from breast cancer patient volunteers and compared the products in terms of quantity (concentration of DNA extracted and DNA obtained per ml of blood used) and quality (260/280 ratio and polymerase chain reaction product amplification) of the yield. On average, the three methods showed no statistically significant differences in the final results, but when we accounted for the time and cost of each method, the differences were very significant. The modified salting out method resulted in a seven- and twofold reduction in cost compared to the commercial kit and the traditional salting out method, respectively, and reduced the time required from 3 days to 1 hour compared to the traditional salting out method. This highlights the modified salting out method as a suitable choice for laboratories and research centres, particularly those dealing with a large number of samples.
Abstract:
According to a study conducted by the International Maritime Organization (IMO), the shipping sector is responsible for 3.3% of global greenhouse gas (GHG) emissions. The 1997 Kyoto Protocol calls upon states to pursue the limitation or reduction of GHG emissions from marine bunker fuels working through the IMO. In 2011, 14 years after the adoption of the Kyoto Protocol, the Marine Environment Protection Committee (MEPC) of the IMO adopted mandatory energy efficiency measures for international shipping, which can be regarded as the first mandatory global GHG reduction instrument for an international industry. The MEPC approved an amendment to Annex VI of the 1973 International Convention for the Prevention of Pollution from Ships (MARPOL 73/78) to introduce a mandatory Energy Efficiency Design Index (EEDI) for new ships and the Ship Energy Efficiency Management Plan (SEEMP) for all ships. Considering the growth projections for human population and world trade, the technical and operational measures may not be able to reduce GHG emissions from international shipping to a satisfactory level. The IMO is therefore considering the introduction of market-based mechanisms that may serve two purposes: providing a fiscal incentive for the maritime industry to operate in a more energy-efficient manner, and offsetting growing ship emissions. Some leading developing countries have already voiced serious reservations about the newly adopted IMO regulations, stating that by imposing the same obligations on all countries, irrespective of their economic status, the amendment rejects the Principle of Common but Differentiated Responsibility (the CBDR Principle), which has always been the cornerstone of international climate change law discourse. They also claim that negotiations for a market-based mechanism should not continue without a clear commitment from the developed countries to promote technical cooperation and the transfer of technology relating to the improvement of the energy efficiency of ships. Against this backdrop, this article explores the challenges for developing countries in implementing the already adopted technical and operational measures.
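For reference, the attained EEDI is essentially a ship's CO2 emission rate divided by its transport work; a simplified form, omitting the correction and adjustment factors of the full IMO formula, is:

```latex
% Simplified attained EEDI (grams of CO2 per tonne-nautical-mile); the full
% IMO formula includes correction factors omitted here.
% P_i: engine power, C_{F,i}: fuel-to-CO2 conversion factor,
% SFC_i: specific fuel consumption, V_ref: reference ship speed.
\[
  \mathrm{EEDI} \approx
  \frac{\sum_i P_i \, C_{F,i} \, \mathrm{SFC}_i}
       {\text{Capacity} \times V_{\mathrm{ref}}}
\]
```

A lower attained EEDI than the regulatory reference line thus requires either less installed power, cleaner fuel, lower fuel consumption, or more transport capacity at the reference speed.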
Abstract:
International shipping is responsible for about 2.7% of the global emissions of CO2. In the absence of proper action, emissions from the maritime sector may grow by 150% to 250% by 2050, in comparison with the level of emissions in 2007. Against this backdrop, the International Maritime Organisation has introduced a mandatory Energy Efficiency Design Index (EEDI) for new ships and the Ship Energy Efficiency Management Plan (SEEMP) for all ships. Some Asian countries have voiced serious reservations about the newly adopted IMO regulations. They have suggested that imposing the same obligations on all countries, irrespective of their economic status, is a serious departure from the Principle of Common but Differentiated Responsibility, which has always been the cornerstone of international climate change law discourse. In this context, this article presents a brief overview of the technical and operational measures from the perspective of Asian countries.
Abstract:
Plant tissue has a complex cellular structure: an aggregate of individual cells bonded by the middle lamella. During drying, plant tissue undergoes extreme deformations, driven mainly by moisture removal and turgor loss. Numerical modelling of this problem becomes challenging because conventional grid-based techniques such as Finite Element Methods (FEM) and Finite Difference Methods (FDM) are limited by their grids under such deformations. This work presents a meshfree approach to model and simulate the deformations of plant tissues during drying, demonstrating the fundamental capability of meshfree methods to handle extreme deformations of multiphase systems. A simplified 2D tissue model is developed by aggregating individual cells while accounting for the stiffness of the middle lamella. Each individual cell is treated as consisting of two main components: cell fluid and cell wall. The cell fluid is modelled using Smoothed Particle Hydrodynamics (SPH) and the cell wall using a Discrete Element Method (DEM). During drying, moisture removal is accounted for by reducing the cell fluid and wall mass, which causes local shrinkage of cells, eventually leading to tissue-scale shrinkage. The cellular deformations are quantified using several cellular geometrical parameters, and good agreement is observed when compared with experiments on apple tissue. The model is also capable of visually replicating dry tissue structures, and it can serve as a step towards complex tissue models that simulate extreme deformations during drying.
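The two numerical ingredients can be sketched independently: an SPH summation estimates the cell fluid density from particle positions, and a DEM-style linear spring supplies the wall force, with the rest length shrinking as moisture (mass) is removed. Kernel choice, particle positions and stiffness values below are illustrative assumptions.

```python
# A minimal sketch of the SPH fluid and DEM wall ingredients named above.
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 2D cubic spline SPH kernel with support radius 2h."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)
    return sigma * np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
                    np.where(q < 2, 0.25 * (2 - q) ** 3, 0.0))

# Fluid particles inside one cell (hypothetical positions), equal masses.
fluid = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
mass, h = 1.0, 0.15
r = np.linalg.norm(fluid[:, None, :] - fluid[None, :, :], axis=-1)
density = (mass * cubic_spline_kernel(r, h)).sum(axis=1)  # SPH summation

# Wall particles joined by a linear spring (DEM). Shrinking the rest length
# as wall moisture/mass is removed drives cell-scale shrinkage.
def spring_force(xa, xb, rest_length, k=50.0):
    d = xb - xa
    length = np.linalg.norm(d)
    return k * (length - rest_length) * d / length  # force on particle a

wall_a, wall_b = np.array([0.0, 0.2]), np.array([0.2, 0.2])
print(density, spring_force(wall_a, wall_b, rest_length=0.15))
```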
Abstract:
This article elucidates and analyzes the fundamental structure underlying the renormalization group (RG) approach as it applies to the solution of any differential equation involving multiple scales. The amplitude equation derived through the elimination of secular terms arising from a naive perturbation expansion of the solution to these equations by the RG approach is reduced to an algebraic equation expressed in terms of the Thiele semi-invariants, or cumulants, of the eliminant sequence {Z_i}, i = 1, 2, …. Its use is illustrated through the solution of both linear and nonlinear perturbation problems, and certain results from the literature are recovered as special cases. The fundamental structure that emerges from the application of the RG approach is not the amplitude equation but the aforementioned algebraic equation.
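The flavour of the secular-term elimination can be seen in a standard textbook example (not drawn from this paper), the weakly damped oscillator:

```latex
% Standard illustrative example: y'' + \epsilon y' + y = 0.
% A naive expansion y = y_0 + \epsilon y_1 + \dots gives
\[
  y = A\cos(t+\theta) - \frac{\epsilon}{2}\,A\,t\cos(t+\theta) + O(\epsilon^2),
\]
% whose secular term grows linearly in t. Renormalizing the amplitude A to
% absorb this growth yields the RG (amplitude) equation
\[
  \frac{dA}{dt} = -\frac{\epsilon}{2}\,A
  \quad\Longrightarrow\quad
  A(t) = A(0)\,e^{-\epsilon t/2},
\]
% which is uniformly valid for long times.
```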
Abstract:
This study reports the synthesis, characterization and application of nano zero-valent iron (nZVI). The nZVI was produced by a reduction method and compared with commercially available ZVI powder for Pb²⁺ removal from the aqueous phase. Compared with the commercial ZVI, the laboratory-made nZVI powder has a much higher specific surface area. XRD patterns revealed zero-valent iron phases in both ZVI materials, and different morphologies were observed using SEM and TEM. EDX spectra revealed an even distribution of Pb on the surface after reaction. XPS analysis confirmed that the immobilized lead was present in its zero-valent and bivalent forms, and a 'core-shell' structure of the prepared nZVI was revealed by combining the XRD and XPS characterizations. In addition, compared with Fluka ZVI, this laboratory-made nZVI has much higher reactivity towards Pb²⁺, with 99.9% removal reached within just 15 min. This synthesized nano-ZVI material has shown great potential for heavy metal immobilization from wastewater.