963 results for Engineering, Industrial|Engineering, System Science|Operations Research


Relevance:

100.00%

Publisher:

Abstract:

A system for temporal data mining includes a computer-readable medium having an application configured to receive, at an input module, a temporal data series having events with start times and end times, a set of allowed dwelling times, and a threshold frequency. The system is further configured to identify, using a candidate identification and tracking module, one or more occurrences in the temporal data series of a candidate episode and to increment a count for each identified occurrence. The system is also configured to produce, at an output module, an output for those episodes whose count of occurrences results in a frequency exceeding the threshold frequency.
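As a rough illustration of the counting step described in the claim, below is a minimal Python sketch of non-overlapped occurrence counting for a serial candidate episode under a dwell-time constraint; the event-tuple representation, the single (lo, hi) dwell window, and the frequency normalisation are assumptions made for the example, not details of the patented system.

```python
def count_occurrences(series, episode, allowed_dwell):
    """Count non-overlapped occurrences of `episode` in `series`.

    series        : list of (event_type, start_time, end_time), time-ordered
    episode       : ordered tuple of event types, e.g. ("A", "B")
    allowed_dwell : (lo, hi) bounds on each event's dwell time (end - start)
    """
    lo, hi = allowed_dwell
    count, i = 0, 0
    while i < len(series):
        matched, j = 0, i
        while j < len(series) and matched < len(episode):
            etype, start, end = series[j]
            if etype == episode[matched] and lo <= end - start <= hi:
                matched += 1
            j += 1
        if matched == len(episode):
            count += 1
            i = j          # non-overlapped counting: resume after this occurrence
        else:
            break          # a later start cannot succeed where this greedy scan failed
    return count


def frequent_episodes(series, candidates, allowed_dwell, threshold_freq):
    """Return the candidate episodes whose frequency exceeds the threshold."""
    span = series[-1][2] - series[0][1] or 1.0
    return [ep for ep in candidates
            if count_occurrences(series, ep, allowed_dwell) / span > threshold_freq]


series = [("A", 0, 1), ("B", 2, 3), ("A", 5, 6), ("B", 7, 9)]
print(frequent_episodes(series, [("A", "B")], (0.5, 1.5), 0.05))  # -> [('A', 'B')]
```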

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a GPU implementation of normalized cuts for the road extraction problem using panchromatic satellite imagery. The roads are extracted in three stages: pre-processing, image segmentation, and post-processing. Initially, the image is pre-processed to reduce clutter (mostly buildings, vegetation, and fallow regions) and thereby improve tolerance to noise. The road regions are then extracted using the normalized cuts algorithm, a graph-based partitioning approach that focuses on extracting the global impression (perceptual grouping) of an image rather than local features. The segmented image is post-processed using the morphological operations of erosion and dilation. Finally, the road-extracted image is overlaid on the original image. A GPGPU (General Purpose Graphical Processing Unit) approach has been adopted to implement the same algorithm on the GPU for fast processing. A performance comparison of the proposed GPU implementation of the normalized cuts algorithm with the earlier CPU implementation is presented. From the results, we conclude that the computational advantage of the proposed GPU implementation grows as the image size increases. A qualitative and quantitative assessment of the segmentation results is also presented.
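For context, a compact CPU-side sketch of one normalized-cuts bipartition step is given below; the intensity-plus-proximity affinity, the parameter values, and the median split are illustrative assumptions, and the GPU version described in the paper would move the affinity construction and eigensolve onto the device.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import eigsh

def ncut_bipartition(img, radius=2, sigma_i=0.1, sigma_x=4.0):
    """One normalized-cuts split of a small grayscale image (values in [0,1])."""
    h, w = img.shape
    n = h * w
    rows, cols, vals = [], [], []
    for y in range(h):
        for x in range(w):
            p = y * w + x
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        # affinity: similar intensity AND spatially close
                        wpq = np.exp(-(img[y, x] - img[yy, xx]) ** 2 / sigma_i ** 2
                                     - (dy * dy + dx * dx) / sigma_x ** 2)
                        rows.append(p); cols.append(yy * w + xx); vals.append(wpq)
    W = csr_matrix((vals, (rows, cols)), shape=(n, n))
    d = np.asarray(W.sum(axis=1)).ravel()
    D = diags(d)
    # second-smallest generalized eigenvector of (D - W) y = lambda * D y
    _, vecs = eigsh(D - W, k=2, M=D, sigma=1e-8, which="LM")
    fiedler = vecs[:, 1]
    return (fiedler > np.median(fiedler)).reshape(h, w)

mask = ncut_bipartition(np.random.rand(16, 16))  # toy input; real use: road imagery
```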

Relevance:

100.00%

Publisher:

Abstract:

The problem of determining the system reliability of randomly vibrating structures arises in many areas of engineering. In this paper we discuss approaches based on Monte Carlo simulations and laboratory testing for tackling problems of time-variant system reliability estimation. The strategy we adopt applies Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with significantly fewer samples than a direct simulation study requires. Notably, we show that the ideas behind Girsanov-transformation-based Monte Carlo simulation can be extended to laboratory testing, assessing the system reliability of engineering structures with fewer samples and hence reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations of the road load response of an automotive system tested on a four-post test rig.
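To make the variance-reduction idea concrete, here is a hedged sketch of Girsanov-based importance sampling for a first-passage probability of a simple scalar SDE; the process, the constant drift shift u, and the threshold are illustrative stand-ins, not the paper's 10-degree-of-freedom model or test-rig setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_prob(u=1.2, n_paths=2000, T=5.0, dt=1e-3,
                 k=1.0, sigma=0.5, threshold=1.5):
    """Estimate P(max_t X_t > threshold) for dX = -k X dt + sigma dW.

    Paths are simulated under the shifted measure Q (drift -k X + sigma*u,
    pushing paths toward the threshold); each failing path is reweighted by
    the Radon-Nikodym derivative dP/dQ = exp(-u * W_tau^Q - 0.5 * u^2 * tau),
    evaluated at the first-passage time tau.
    """
    n_steps = int(T / dt)
    est = 0.0
    for _ in range(n_paths):
        x, wq, t = 0.0, 0.0, 0.0   # state, accumulated Q-Brownian path, time
        hit = False
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt))
            x += (-k * x + sigma * u) * dt + sigma * dW  # Euler-Maruyama step
            wq += dW
            t += dt
            if x > threshold:
                hit = True
                break
        if hit:
            est += np.exp(-u * wq - 0.5 * u ** 2 * t)  # likelihood-ratio weight
    return est / n_paths

print(failure_prob())  # rare-event estimate with far fewer samples than direct MC
```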

Relevance:

100.00%

Publisher:

Abstract:

Several computational tools are currently available to support protection coordination studies, allowing designers to plot relay curves according to the parameters they choose. However, the process of selecting acceptable curves, with the large number of possibilities and variables involved, is complex and requires simplifications and trial-and-error iterations. The specialist's experience and knowledge, together with considerable hard work, are fundamental factors in this process; indeed, IEEE Std. 242 describes protection coordination as more of an art than a science. This work presents the development of a genetic algorithm and an algorithm inspired by ant colony optimization to automate and optimize the coordination of the phase overcurrent function of microprocessor-based digital relays (IEDs) in industrial substations. Six case studies, obtained from a database model based on a real industrial power system, are evaluated. In every case study, the developed algorithms produced coordinated curves that satisfied all previously established constraints, and the differences in relay operating times at the three-phase short-circuit current were very close to the value established as optimal. The developed tools demonstrated their potential when applied to protection coordination studies, with positive results in improving the safety of installations and personnel, the continuity of the process, and the prevention of emissions harmful to the environment.
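To illustrate the kind of optimisation involved, the sketch below runs a toy genetic algorithm over the time-dial settings of two relays using the IEC standard-inverse curve; the two-relay topology, pickup currents, fault level, and 0.3 s coordination time interval are assumptions made for the example, not the thesis's six case studies.

```python
import random

A, B = 0.14, 0.02   # IEC standard-inverse curve constants
CTI = 0.3           # assumed coordination time interval, seconds

def op_time(tds, i_fault, i_pickup):
    """Inverse-time relay operating time: t = TDS * A / ((I/Ip)^B - 1)."""
    return tds * A / ((i_fault / i_pickup) ** B - 1.0)

# assumed pair: primary relay R1 (pickup 400 A) backed up by R2 (pickup 800 A),
# both seeing a 5 kA three-phase fault
I_FAULT, IP1, IP2 = 5000.0, 400.0, 800.0

def fitness(ind):
    t1 = op_time(ind[0], I_FAULT, IP1)
    t2 = op_time(ind[1], I_FAULT, IP2)
    penalty = max(0.0, CTI - (t2 - t1)) * 100.0  # enforce backup delay >= CTI
    return t1 + t2 + penalty                     # minimize total operating time

def ga(pop_size=50, gens=200, lo=0.05, hi=1.0):
    pop = [[random.uniform(lo, hi), random.uniform(lo, hi)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]             # selection: keep best half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]  # arithmetic crossover
            if random.random() < 0.2:                      # Gaussian mutation
                k = random.randrange(2)
                child[k] = min(hi, max(lo, child[k] + random.gauss(0, 0.05)))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
print("TDS settings:", best, "total time + penalty:", fitness(best))
```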

Relevance:

100.00%

Publisher:

Abstract:

A fundamental understanding of the information-carrying capacity of optical channels requires the signal and physical channel to be modeled quantum mechanically. This thesis considers the problems of distributing multi-party quantum entanglement to distant users in a quantum communication system and determining the ability of quantum optical channels to reliably transmit information. A recent proposal for a quantum communication architecture that realizes long-distance, high-fidelity qubit teleportation is reviewed. Previous work on this communication architecture is extended in two primary ways. First, models are developed for assessing the effects of amplitude, phase, and frequency errors in the entanglement source of polarization-entangled photons, as well as fiber loss and imperfect polarization restoration, on the throughput and fidelity of the system. Second, an error model is derived for an extension of this communication architecture that allows for the production and storage of three-party entangled Greenberger-Horne-Zeilinger states. A performance analysis of the quantum communication architecture in qubit teleportation and quantum secret sharing communication protocols is presented. Recent work on determining the channel capacity of optical channels is extended in several ways. Classical capacity is derived for a class of Gaussian Bosonic channels representing the quantum version of classical colored Gaussian-noise channels. The proof is strongly motivated by the standard technique of whitening Gaussian noise used in classical information theory. Minimum output entropy problems related to these channel capacity derivations are also studied. These single-user Bosonic capacity results are extended to a multi-user scenario by deriving capacity regions for single-mode and wideband coherent-state multiple access channels. An even larger capacity region is obtained when the transmitters use non-classical Gaussian states, and an outer bound on the ultimate capacity region is presented.
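As a numeric aside on the single-user capacity results mentioned above, the following sketch evaluates the well-known classical capacity of the single-mode lossy bosonic channel, C = g(eta * nbar) with g(x) = (x+1)log2(x+1) - x log2 x (Giovannetti et al., 2004); this is assumed background, not the thesis's colored-noise derivation.

```python
import math

def g(x):
    """von Neumann entropy (bits) of a thermal state with mean photon number x."""
    return 0.0 if x <= 0 else (x + 1) * math.log2(x + 1) - x * math.log2(x)

def lossy_capacity(eta, nbar):
    """Classical capacity (bits per channel use) of a lossy bosonic channel
    with transmissivity eta under a mean photon-number constraint nbar."""
    return g(eta * nbar)

print(lossy_capacity(0.5, 10.0))  # ~3.9 bits per channel use
```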

Relevance:

100.00%

Publisher:

Abstract:

Conjugative plasmids play a vital role in bacterial adaptation through horizontal gene transfer. Explaining how plasmids persist in host populations, however, is difficult, given the high costs often associated with plasmid carriage. Compensatory evolution that ameliorates this cost can rescue plasmids from extinction. In a recently published study we showed that compensatory evolution repeatedly targeted the same bacterial regulatory system, GacA/GacS, in populations of plasmid-carrying bacteria evolving across a range of selective environments. Mutations in these genes arose rapidly and completely eliminated the cost of plasmid carriage. Here we extend our analysis using an individual-based model to explore the dynamics of compensatory evolution in this system. We show that mutations which ameliorate the cost of plasmid carriage can prevent both the loss of plasmids from the population and the fixation of accessory traits on the bacterial chromosome. We discuss how the outcome of compensatory evolution depends on the strength and availability of such mutations and on the rate at which beneficial accessory traits integrate onto the host chromosome.
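A minimal individual-based sketch of these dynamics might look as follows; all rates (plasmid cost, compensatory mutation, conjugation, segregational loss) are illustrative placeholders rather than the parameters of the published model, and compensation is simplified to a single host state.

```python
import random
from collections import Counter

COST = 0.1   # fitness cost of uncompensated plasmid carriage
MUT = 1e-3   # per-division compensatory mutation probability
CONJ = 1e-2  # conjugation probability per generation, scaled by donor frequency
SEG = 1e-3   # segregational plasmid-loss probability
N = 1000     # fixed population size (Wright-Fisher style)

# cell states: 'F' plasmid-free, 'P' plasmid-bearing (costly), 'C' compensated
FITNESS = {"F": 1.0, "P": 1.0 - COST, "C": 1.0}

def generation(pop):
    # selection: sample parents with probability proportional to fitness
    weights = [FITNESS[c] for c in pop]
    pop = random.choices(pop, weights=weights, k=N)
    frac_plasmid = sum(c != "F" for c in pop) / N
    out = []
    for cell in pop:
        if cell in ("P", "C") and random.random() < SEG:
            cell = "F"                      # plasmid lost at division
        elif cell == "P" and random.random() < MUT:
            cell = "C"                      # compensatory mutation: cost ameliorated
        elif cell == "F" and random.random() < CONJ * frac_plasmid:
            cell = "P"                      # receives plasmid by conjugation
        out.append(cell)
    return out

pop = ["P"] * N
for _ in range(500):
    pop = generation(pop)
print(Counter(pop))  # compensated carriers typically dominate under these rates
```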

Relevance:

100.00%

Publisher:

Abstract:

With the rapid growth of the Internet and digital communications, the volume of sensitive electronic transactions transferred and stored over insecure media has increased dramatically in recent years. The growing demand for cryptographic systems to secure this data, across a multitude of platforms ranging from large servers to small mobile devices and smart cards, has necessitated research into low-cost, flexible and secure solutions. As constraints on architectures such as area, speed and power become key factors in choosing a cryptosystem, methods for speeding up the development and evaluation process are necessary. This thesis investigates flexible hardware architectures for the main components of a cryptographic system. Dedicated hardware accelerators can provide significant performance improvements compared to implementations on general-purpose processors. Each of the proposed designs is analysed in terms of speed, area, power, energy and efficiency. Field Programmable Gate Arrays (FPGAs) are chosen as the development platform due to their fast development time and reconfigurable nature. Firstly, a reconfigurable architecture for performing elliptic curve point scalar multiplication on an FPGA is presented. Elliptic curve cryptography is one method of securing data, offering security levels similar to traditional systems such as RSA, but with smaller key sizes, which translate into lower memory and bandwidth requirements. The architecture is implemented using different underlying algorithms and coordinates, covering dedicated Double-and-Add algorithms, twisted Edwards algorithms and SPA-secure algorithms, and its power and energy consumption on an FPGA are measured. Hardware implementation results for these new algorithms are compared against their software counterparts, and the best choices for minimum area-time and area-energy circuits are then identified and examined for larger key and field sizes. Secondly, implementation methods are presented for another component of a cryptographic system, namely the hash functions developed in the recently concluded SHA-3 hash competition. Various designs from the three rounds of the NIST-run competition are implemented on FPGA, along with an interface that allows fair comparison of the different hash functions when operating in a standardised and constrained environment. Different methods of implementing the designs, and their subsequent performance, are examined in terms of throughput, area and energy costs under various constraint metrics. Comparing many different implementation methods and algorithms is nontrivial; another aim of this thesis is therefore the development of generic interfaces used both to reduce implementation and test time and to enable fair baseline comparisons of different algorithms operating in a standardised and constrained environment. Finally, a hardware-software co-design cryptographic architecture is presented. This architecture is capable of supporting multiple types of cryptographic algorithms and is described through an application performing public key cryptography, namely the Elliptic Curve Digital Signature Algorithm (ECDSA). It makes use of the elliptic curve architecture and the hash functions described previously; these components, along with a random number generator, provide hardware acceleration for a MicroBlaze-based cryptographic system. The trade-off between performance and flexibility is discussed using dedicated software and hardware-software co-design implementations of the elliptic curve point scalar multiplication block. Results are then presented in terms of the overall cryptographic system.
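For orientation, here is a toy software sketch of the double-and-add point scalar multiplication that the hardware blocks above accelerate, over an assumed small short-Weierstrass curve rather than a standardised one, and without the SPA countermeasures discussed in the thesis.

```python
P_MOD = 97                       # toy prime field; curve y^2 = x^3 + 2x + 3 mod 97
A_COEF, B_COEF = 2, 3

def point_add(P, Q):
    """Affine point addition/doubling; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None              # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + A_COEF) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    """Left-to-right double-and-add: returns k * P."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R)      # double every iteration
        if bit == "1":
            R = point_add(R, P)  # add on set bits of k
    return R

G = (3, 6)  # on the curve: 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97)
print(scalar_mult(5, G))
```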

Relevance:

100.00%

Publisher:

Abstract:

Anaerobic digestion (AD) of biodegradable waste is an environmentally and economically sustainable solution which incorporates waste treatment and energy recovery. The organic fraction of municipal solid waste (OFMSW), which consists mostly of food waste, is highly degradable under anaerobic conditions. Biogas produced from OFMSW, when upgraded to biomethane, is recognised as one of the most sustainable renewable biofuels and can also be one of the cheapest sources of biomethane if a gate fee is associated with the substrate. OFMSW is a complex and heterogeneous material whose characteristics may differ widely depending on the source of origin and the collection system used. The research presented in this thesis investigates the potential energy resource from a wide range of organic waste streams through field and laboratory research on real-world samples. OFMSW samples collected from a range of sources generated methane yields ranging from 75 to 160 m3 per tonne. Higher methane yields are associated with source-segregated food waste from commercial catering premises as opposed to domestic sources. The inclusion of garden waste reduces the specific methane yield from household organic waste. In continuous AD trials it was found that a conventional continuously stirred tank reactor (CSTR) gave the highest specific methane yields at a moderate organic loading rate of 2 kg volatile solids (VS) m-3 digester day-1 and a hydraulic retention time of 30 days. The average specific methane yield obtained at this loading rate in continuous digestion was 560 ± 29 L CH4 kg-1 VS, which exceeded the biomethane potential test result by 5%. The low carbon-to-nitrogen ratio (C:N < 14:1) associated with canteen food waste led to increasing concentrations of volatile fatty acids alongside high concentrations of ammonia nitrogen at higher organic loading rates. At an organic loading rate of 4 kg VS m-3 day-1 the specific methane yield dropped considerably (381 L CH4 kg-1 VS), the pH rose to 8.1, and free ammonia (NH3) concentrations reached toxic levels towards the end of the trial (ca. 950 mg L-1). A novel two-phase AD reactor configuration, consisting of a series of sequentially fed leach bed reactors connected to an upflow anaerobic sludge blanket (UASB), demonstrated a high rate of organic matter decay but resulted in lower specific methane yields (384 L CH4 kg-1 VS) than the conventional CSTR system.
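As a back-of-the-envelope check on the free ammonia figure quoted above, the sketch below applies the temperature-dependent NH4+/NH3 dissociation relation of Hansen et al. (1998); the TAN value is an assumed input, not a number reported in the thesis.

```python
def free_ammonia(tan_mg_l, ph, temp_c):
    """Free NH3 (mg/L) from total ammonia nitrogen (TAN), pH, and temperature."""
    T = temp_c + 273.15
    pka = 0.09018 + 2729.92 / T                 # NH4+/NH3 dissociation constant
    frac = 1.0 / (1.0 + 10.0 ** (pka - ph))     # fraction present as free NH3
    return tan_mg_l * frac

# at mesophilic 35 C and the pH of 8.1 reported above, an assumed TAN of
# ~7.7 g/L puts free ammonia near the ~950 mg/L toxicity level quoted
print(round(free_ammonia(7700, 8.1, 35.0)))    # ~954
```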

Relevance:

100.00%

Publisher:

Abstract:

A new science curriculum was introduced to primary schools in the Republic of Ireland in 2003. This curriculum, broader in scope than its 1971 predecessor (Curaclam na Bunscoile, 1971), requires teachers at all levels of primary school to teach science. A review carried out in 2008 of children’s experiences of this curriculum found that its implementation throughout the country was uneven. This finding, together with the increasing numbers of teachers who were requesting support to implement this curriculum, suggested the need for a review of Irish primary teachers’ needs in the area of science. The research study described in this thesis was undertaken to establish the extent of Irish primary teachers’ needs in the area of science by conducting a national survey. The data from this survey, together with data from international studies, were used to develop a theoretical framework for a model of Continuing Professional Development (CPD). This theoretical framework was used to design the Whole-School, In-School (WSIS) CPD model, which was trialled in two case-study schools. The participants in these ‘action-research’ case-studies acted as co-researchers, who contributed to the development and evolution of the CPD model in each school. Analysis of the data gathered as part of the evaluation of the Whole-School, In-School (WSIS) model of CPD found an improved experience of science for children and improved confidence for teachers teaching at all levels of the primary school. In addition, a template for the establishment of a culture of collaborative CPD in schools has been developed from an analysis of the data.

Relevance:

100.00%

Publisher:

Abstract:

This paper studies the problem of scheduling jobs in a two-machine open shop to minimize the makespan. Jobs are grouped into batches and are processed without preemption. A batch setup time is required on each machine before the first job is processed, and whenever a machine switches from processing a job in one batch to a job of another batch. For this NP-hard problem, we propose a linear-time heuristic algorithm that creates a group technology schedule, in which no batch is split into sub-batches. We demonstrate that our heuristic is a 5/4-approximation algorithm. Moreover, we show that no group technology algorithm can guarantee a worst-case performance ratio less than 5/4.
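To illustrate the group technology schedule class being analysed, the sketch below merges each batch (its per-machine setup plus its jobs) into one composite job and evaluates the classical two-machine open shop bound max(sum a, sum b, max(a_i + b_i)); this illustrates the schedule class under the assumption that setups attach to batches per machine, and is not the paper's heuristic itself.

```python
def gt_makespan_lower_bound(batches):
    """Lower bound on the makespan of any group technology schedule.

    batches: list of (setup_a, setup_b, jobs) where jobs is a list of
             (a_j, b_j) processing times on machines A and B.
    """
    # aggregate each batch into a composite job per machine: setup + job times
    comp = [(sa + sum(a for a, _ in jobs), sb + sum(b for _, b in jobs))
            for sa, sb, jobs in batches]
    total_a = sum(a for a, _ in comp)
    total_b = sum(b for _, b in comp)
    # classical two-machine open shop bound on the composite instance
    return max(total_a, total_b, max(a + b for a, b in comp))

batches = [(2, 1, [(3, 4), (2, 2)]),   # batch 1: setups 2/1, two jobs
           (1, 3, [(5, 1)])]           # batch 2: setups 1/3, one job
print(gt_makespan_lower_bound(batches))  # -> 14
```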