971 results for Data Compression
Abstract:
The capability of storing multi-bit information is one of the most important challenges in memory technologies. An ambipolar polymer, which intrinsically has the ability to transport both electrons and holes as a semiconducting layer, provides an opportunity for the charge trapping layer to trap both electrons and holes efficiently. Here, we achieved a large memory window and distinct multilevel data storage by utilizing the ambipolar charge trapping mechanism. The as-fabricated flexible memory devices display five well-defined data levels with good endurance and retention properties, showing potential application in printed electronics.
Abstract:
Realistic virtual models of leaf surfaces are important for a number of applications in the plant sciences, such as modelling agrichemical spray droplet movement and spreading on the surface. In this context, the virtual surfaces are required to be sufficiently smooth to facilitate the use of the mathematical equations that govern the motion of the droplet. While an effective approach is to apply discrete smoothing D2-spline algorithms to reconstruct the leaf surfaces from three-dimensional scanned data, difficulties arise when dealing with wheat leaves that tend to twist and bend. To overcome this topological difficulty, we develop a parameterisation technique that rotates and translates the original data, allowing the surface to be fitted using the discrete smoothing D2-spline methods in the new parameter space. Our algorithm uses finite element methods to represent the surface as a linear combination of compactly supported shape functions. Numerical results confirm that the parameterisation, along with the use of discrete smoothing D2-spline techniques, produces realistic virtual representations of wheat leaves.
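The discrete smoothing D2-spline machinery itself is particular to the paper, but the pre-processing step described above (rotating and translating the scanned points into a parameter space where the surface becomes single-valued before fitting a smoothing spline) can be sketched with standard tools. The snippet below is a rough illustration only: it uses a principal-axes rotation and SciPy's SmoothBivariateSpline as stand-ins for the authors' parameterisation and D2-spline fit, and the synthetic twisted-leaf point cloud and smoothing factor are assumptions.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def fit_leaf_surface(points, smoothing=1.0):
    """Translate and rotate a scanned leaf point cloud into a local frame,
    then fit a smoothing spline z = f(x, y). A stand-in for the paper's
    discrete smoothing D2-spline reconstruction."""
    origin = points.mean(axis=0)
    centred = points - origin                  # translation step
    # Rotate onto the principal axes so the leaf lies roughly in the x-y plane
    # and the height becomes single-valued (the parameterisation step).
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    local = centred @ vt.T
    x, y, z = local[:, 0], local[:, 1], local[:, 2]
    spline = SmoothBivariateSpline(x, y, z, s=smoothing * len(points))
    return spline, vt, origin

# Synthetic twisted, bent "leaf" point cloud (illustrative only).
rng = np.random.default_rng(0)
u = rng.uniform(0, 30, 800)                    # along the leaf
v = rng.uniform(-3, 3, 800)                    # across the leaf
twist = 0.03 * u                               # gentle twist along the length
pts = np.column_stack([u, v * np.cos(twist), v * np.sin(twist) + 0.002 * u**2])
spline, rotation, origin = fit_leaf_surface(pts)
```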
Abstract:
This thesis investigates how Open Government Data (OGD) concepts and practices might be implemented in the State of Qatar to achieve more transparent, effective and accountable government. The thesis concludes with recommendations as to how Qatar, as a developing country, might enhance the accessibility and usability of its OGD and implement successful and sustainable OGD systems and practices.
Abstract:
Background: Small RNA sequencing is commonly used to identify novel miRNAs and to determine their expression levels in plants. There are several miRNA identification tools for animals, such as miRDeep, miRDeep2 and miRDeep*. miRDeep-P was developed to identify plant miRNAs using miRDeep's probabilistic model of miRNA biogenesis, but it depends on several third-party tools and lacks a user-friendly interface. The objective of our miRPlant program is to predict novel plant miRNAs while providing a user-friendly interface with improved prediction accuracy. Results: We have developed a user-friendly plant miRNA prediction tool called miRPlant. Using 16 plant miRNA datasets from four different plant species, we show that miRPlant achieves at least a 10% improvement in accuracy over miRDeep-P, the most popular plant miRNA prediction tool. Furthermore, miRPlant uses a graphical user interface for data input and output, and identified miRNAs are shown with all RNAseq reads in a hairpin diagram. Conclusions: We have developed miRPlant, which extends miRDeep* to various plant species by adopting suitable strategies to identify hairpin excision regions and to filter hairpin structures for plants. miRPlant does not require any third-party tools such as mapping or RNA secondary structure prediction tools. miRPlant is also the first plant miRNA prediction tool that dynamically plots the miRNA hairpin structure with small reads for identified novel miRNAs. This feature will enable biologists to visualize novel pre-miRNA structure and the location of small RNA reads relative to the hairpin. Moreover, miRPlant can be easily used by biologists with limited bioinformatics skills.
Abstract:
Live migration of multiple Virtual Machines (VMs) has become an integral management activity in data centers for power saving, load balancing and system maintenance. While state-of-the-art live migration techniques focus on improving the migration performance of an independent single VM, little has been investigated for the case of live migration of multiple interacting VMs. Live migration is mostly influenced by the network bandwidth, and arbitrarily migrating a VM that has data inter-dependencies with other VMs may increase the bandwidth consumption and adversely affect the performance of subsequent migrations. In this paper, we propose a Random Key Genetic Algorithm (RKGA) that efficiently schedules the migration of a given set of VMs, accounting for both inter-VM dependencies and the data center communication network. The experimental results show that the RKGA can schedule the migration of multiple VMs with significantly shorter total migration time and total downtime compared to a heuristic algorithm.
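As a rough illustration of the random-key encoding, the sketch below evolves migration orders for a set of VMs under a toy cost model in which dependency traffic with not-yet-migrated VMs reduces the bandwidth available to a migration. The cost model, GA parameters and crossover/mutation scheme are illustrative assumptions, not the RKGA described in the paper.

```python
import random

def keys_to_order(keys):
    """Decode a random-key chromosome: migrate VMs in ascending key order."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def migration_time(order, vm_mem, bandwidth, dep_traffic):
    """Toy cost model: each VM's copy time is its memory size divided by the
    bandwidth left after dependency traffic with not-yet-migrated VMs."""
    total, pending = 0.0, set(order)
    for vm in order:
        pending.discard(vm)
        used = sum(dep_traffic.get((vm, o), 0.0) + dep_traffic.get((o, vm), 0.0)
                   for o in pending)
        total += vm_mem[vm] / max(bandwidth - used, 1e-6)
    return total

def rkga(vm_mem, bandwidth, dep_traffic, pop=40, gens=200, elite=8, p_mut=0.1):
    """Minimal random-key GA: elitist survival, biased uniform crossover,
    'immigrant' mutation by resampling a whole chromosome."""
    n = len(vm_mem)
    fitness = lambda k: migration_time(keys_to_order(k), vm_mem,
                                       bandwidth, dep_traffic)
    population = [[random.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        nxt = population[:elite]
        while len(nxt) < pop:
            a = random.choice(population[:elite])
            b = random.choice(population)
            child = [ai if random.random() < 0.7 else bi for ai, bi in zip(a, b)]
            if random.random() < p_mut:
                child = [random.random() for _ in range(n)]
            nxt.append(child)
        population = nxt
    return keys_to_order(min(population, key=fitness))

# Three VMs (memory in GB), one heavy dependency, 10 GB/s network (illustrative).
order = rkga({0: 4.0, 1: 8.0, 2: 2.0}, bandwidth=10.0, dep_traffic={(0, 1): 6.0})
```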
Abstract:
Spatial data are now prevalent in a wide range of fields, including environmental and health science. This has led to the development of a range of approaches for analysing patterns in these data. In this paper, we compare several Bayesian hierarchical models for analysing point-based data via discretization of the study region, resulting in grid-based spatial data. The approaches considered include two parametric models and a semiparametric model. We highlight the methodology and computation for each approach. Two simulation studies are undertaken to compare the performance of these models for various structures of simulated point-based data which resemble environmental data. A case study of a real dataset is also conducted to demonstrate a practical application of the modelling approaches. Goodness-of-fit statistics are computed to compare estimates of the intensity functions. The deviance information criterion is also considered as an alternative model evaluation criterion. The results suggest that the adaptive Gaussian Markov random field model performs well for highly sparse point-based data where there are large variations or clustering across the space, whereas the discretized log Gaussian Cox process produces a good fit for dense and clustered point-based data. One should generally consider the nature and structure of the point-based data in order to choose the appropriate method for modelling discretized spatial point-based data.
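The hierarchical models themselves need specialised samplers, but the shared first step, discretizing point-based data into grid cell counts to which a grid-based model (for example a discretized log Gaussian Cox process) is then fitted, is easy to sketch. The grid size, bounds and synthetic clustered events below are illustrative assumptions.

```python
import numpy as np

def discretize_points(x, y, nx, ny, bounds):
    """Bin point-event locations into a regular grid, yielding the cell counts
    that grid-based spatial models are fitted to."""
    xmin, xmax, ymin, ymax = bounds
    counts, _, _ = np.histogram2d(x, y, bins=[nx, ny],
                                  range=[[xmin, xmax], [ymin, ymax]])
    cell_area = ((xmax - xmin) / nx) * ((ymax - ymin) / ny)
    return counts, cell_area

# Synthetic clustered point pattern (illustrative only).
rng = np.random.default_rng(1)
centres = rng.uniform(0, 10, size=(5, 2))
pts = np.vstack([c + 0.3 * rng.standard_normal((200, 2)) for c in centres])
counts, area = discretize_points(pts[:, 0], pts[:, 1], 20, 20, (0, 10, 0, 10))

# Crude per-cell intensity estimate; a hierarchical model would instead place
# a spatial prior (e.g. a GMRF or log-Gaussian process) on the log-intensity.
naive_intensity = counts / area
```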
Abstract:
Road asset managers are seeking analysis of the whole road network to supplement statistical analyses of small subsets of homogeneous roadway. This study outlines the use of data mining techniques capable of analyzing the wide range of situations found on the network, with a focus on the role of skid resistance in the cause of crashes. Results from the analyses show that on non-crash-prone roads with low crash rates, skid resistance contributes only in a minor way, whereas on high-crash roadways, skid resistance often contributes significantly to the calculated crash rate. The results provide evidence supporting a causal relationship between skid resistance and crashes and highlight the importance of skid resistance in decision making in road asset management.
Abstract:
This chapter describes decentralized data fusion algorithms for a team of multiple autonomous platforms. Decentralized data fusion (DDF) provides a useful basis on which to build cooperative information gathering tasks for robotic teams operating in outdoor environments. Through the DDF algorithms, each platform can maintain a consistent global solution from which decisions may then be made. Comparisons are made between implementations of DDF using two probabilistic representations: the first uses Gaussian estimates and the second Gaussian mixtures, compared on a common data set. The overall system design is detailed, providing insight into the overall complexity of implementing a robust DDF system for use in information gathering tasks in outdoor UAV applications.
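As a minimal sketch of the Gaussian variant, the snippet below fuses two platforms' estimates in information form while subtracting the information they already share, in the style of a channel filter; the Gaussian-mixture representation and the chapter's actual common-information bookkeeping are not reproduced, and the example numbers are assumptions.

```python
import numpy as np

def to_information(mean, cov):
    """Convert a Gaussian estimate to information form: Y = P^-1, y = Y @ m."""
    Y = np.linalg.inv(cov)
    return Y @ mean, Y

def ddf_fuse(est_a, est_b, est_common):
    """Fuse two platforms' Gaussian estimates in information form, subtracting
    the information they already share so it is not double-counted."""
    ya, Ya = to_information(*est_a)
    yb, Yb = to_information(*est_b)
    yc, Yc = to_information(*est_common)
    Yf = Ya + Yb - Yc
    yf = ya + yb - yc
    Pf = np.linalg.inv(Yf)
    return Pf @ yf, Pf                      # fused mean and covariance

# Two platforms with a shared prior over a 2-D state (illustrative numbers).
est_a = (np.array([1.0, 2.0]), np.diag([0.5, 1.0]))
est_b = (np.array([1.2, 1.8]), np.diag([1.0, 0.5]))
shared = (np.array([1.1, 1.9]), np.diag([2.0, 2.0]))
mean_f, cov_f = ddf_fuse(est_a, est_b, shared)
```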
Abstract:
We explore the relationship between form and data as a design agenda and learning strategy for novice visual information designers. Our students are university seniors in digital visual design but novices in information design, manipulation and interpretation. We describe design strategies developed to scaffold sophisticated aesthetic and conceptual engagement despite limited understanding of the domain of designing with information. These revolve around an open-ended design project in which students created a physical design from data of their own choosing and research. The accompanying learning strategies concern this relationship between data and form, investigating it materially, formally and through ideation. Exemplary student works that cross media and design domains are described.
Abstract:
A nonlinear interface element modelling method is formulated for predicting the deformation and failure of highly adhesive thin layer polymer mortared masonry exhibiting failure of units and mortar. Plastic flow vectors are explicitly integrated within the implicit finite element framework instead of relying on predictor–corrector-like approaches. The method is calibrated using experimental data from uniaxial compression, shear triplet and flexural beam tests. The model is validated using a thin layer mortared masonry shear wall whose experimental datasets are reported in the literature, and is then used to examine the behaviour of thin layer mortared masonry under biaxial loading.
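A toy sketch of what explicitly integrating the plastic flow vector can look like for a single cohesive-frictional interface point is given below; it uses a simple Mohr-Coulomb-type yield function with non-associated flow evaluated at the elastic trial state. The yield parameters, flow rule and two-component (shear/normal) traction model are illustrative assumptions, not the paper's interface formulation.

```python
import numpy as np

def explicit_interface_step(t, du, k, c, phi, psi):
    """One explicit stress update for a cohesive-frictional interface point.
    t  : current tractions [t_s, t_n]        (shear, normal)
    du : relative displacement increment     [du_s, du_n]
    k  : elastic interface stiffnesses       [k_s, k_n]
    c, phi, psi : cohesion, friction angle, dilatancy angle."""
    k = np.asarray(k, dtype=float)
    t_trial = np.asarray(t, dtype=float) + k * np.asarray(du, dtype=float)
    # Mohr-Coulomb-type yield function evaluated at the elastic trial state.
    f = abs(t_trial[0]) + t_trial[1] * np.tan(phi) - c
    if f <= 0.0:
        return t_trial                              # purely elastic step
    n = np.array([np.sign(t_trial[0]), np.tan(phi)])  # yield gradient
    m = np.array([np.sign(t_trial[0]), np.tan(psi)])  # flow vector (non-associated)
    dlam = f / (n @ (k * m))                        # plastic multiplier from df = 0
    return t_trial - dlam * k * m                   # corrected tractions

# Example: shear the interface until it slips (illustrative parameters, MPa).
t = np.array([0.0, -0.5])                           # 0.5 MPa pre-compression
for _ in range(5):
    t = explicit_interface_step(t, du=[0.01, 0.0], k=[50.0, 100.0],
                                c=0.3, phi=np.radians(35), psi=np.radians(5))
```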
Abstract:
Current design rules for the member capacities of cold-formed steel columns are based on the same non-dimensional strength curve for both fixed- and pinned-ended columns at ambient temperature. This research investigated the accuracy of using the current ambient-temperature design rules in the Australian/New Zealand (AS/NZS 4600), American (AISI S100) and European (Eurocode 3 Part 1.3) standards to determine the flexural–torsional buckling capacities of cold-formed steel columns at uniform elevated temperatures using appropriately reduced mechanical properties. It was found that these design rules accurately predicted the member capacities of pin-ended lipped channel columns undergoing flexural–torsional buckling at elevated temperatures. However, for fixed-ended columns with warping fixity undergoing flexural–torsional buckling, the current design rules significantly underestimated the column capacities because they disregard the beneficial effect of warping fixity. This paper therefore recommends the use of improved design rules, developed for ambient-temperature conditions, to predict the axial compression capacities of fixed-ended columns subject to flexural–torsional buckling at elevated temperatures within the AS/NZS 4600 and AISI S100 design provisions. The accuracy of the proposed fire design rules was verified using finite element analyses and test results of cold-formed lipped channel columns at elevated temperatures, except for low-strength steel columns with intermediate slenderness, whose behaviour was influenced by the increased nonlinearity in the stress–strain curves at elevated temperatures. Further research is required to include these effects within the AS/NZS 4600 and AISI S100 design rules. However, the Eurocode 3 Part 1.3 design rules can be used for this purpose by adopting suitable buckling curves as recommended in this paper.
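For context, the baseline non-dimensional strength curve shared by AS/NZS 4600 and AISI S100 can be evaluated with temperature-reduced properties as sketched below; the warping-fixity modification the paper recommends is not reproduced here, and the input numbers are purely illustrative assumptions.

```python
def column_capacity(f_y_T, f_oc_T, A_e):
    """Baseline AS/NZS 4600 / AISI S100 column strength curve evaluated with
    temperature-reduced properties:
    f_y_T  : yield stress at temperature T (MPa)
    f_oc_T : elastic flexural-torsional buckling stress at T (MPa)
    A_e    : effective cross-sectional area (mm^2)."""
    lam = (f_y_T / f_oc_T) ** 0.5                   # non-dimensional slenderness
    if lam <= 1.5:
        f_n = (0.658 ** (lam ** 2)) * f_y_T
    else:
        f_n = (0.877 / lam ** 2) * f_y_T
    return A_e * f_n / 1e3                          # member capacity in kN

# Illustrative only: reduced properties at an elevated temperature.
print(column_capacity(f_y_T=300.0, f_oc_T=250.0, A_e=500.0))   # roughly 91 kN
```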
Abstract:
In recent years, increasing focus has been placed on making good business decisions using the products of data analysis. With the advent of the Big Data phenomenon, this is more apparent than ever before. The question, however, is how organizations can trust decisions made on the basis of results obtained from the analysis of untrusted data. Assurance is needed that the data and datasets informing these decisions have not been tainted by an outside agency. This study proposes enabling the authentication of datasets, specifically by extending the RESTful architectural scheme to include authentication parameters, while operating within a larger holistic security framework architecture or model compliant with legislation.
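As a hypothetical sketch of what extending a RESTful scheme with authentication parameters could look like, the snippet below signs a dataset request with an HMAC and a timestamp so the server can verify that neither the request nor the dataset reference has been altered. The parameter names, endpoint path and shared-key scheme are assumptions for illustration, not the framework proposed in the study.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET_KEY = b"shared-secret"          # hypothetical shared key

def sign_dataset_request(dataset_id, params, secret=SECRET_KEY):
    """Attach hypothetical authentication parameters (timestamp + HMAC) to a
    RESTful dataset request."""
    payload = dict(params, dataset=dataset_id, ts=str(int(time.time())))
    canonical = urlencode(sorted(payload.items()))
    payload["sig"] = hmac.new(secret, canonical.encode(),
                              hashlib.sha256).hexdigest()
    return "/datasets/{}?{}".format(dataset_id, urlencode(sorted(payload.items())))

def verify_request(query_params, secret=SECRET_KEY, max_age=300):
    """Server side: recompute the HMAC and reject stale or altered requests."""
    params = dict(query_params)
    received = params.pop("sig")
    canonical = urlencode(sorted(params.items()))
    expected = hmac.new(secret, canonical.encode(), hashlib.sha256).hexdigest()
    fresh = time.time() - int(params["ts"]) < max_age
    return hmac.compare_digest(received, expected) and fresh

url = sign_dataset_request("census-2011", {"format": "csv"})
```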