926 results for Computer storage devices
Abstract:
Bulk gallium nitride (GaN) power semiconductor devices have gained significant interest in recent years, creating the need for technology computer-aided design (TCAD) simulation to accurately model and optimize these devices. This paper comprehensively reviews and compares the different GaN physical models and model parameters reported in the literature, and discusses the appropriate selection of these models and parameters for TCAD simulation. 2-D drift-diffusion semi-classical simulations are carried out for 2.6 kV and 3.7 kV bulk GaN vertical PN diodes. The simulated forward current-voltage and reverse breakdown characteristics are in good agreement with the measurement data, even over a wide temperature range.
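As an illustration of the kind of physical model such a review covers, the sketch below implements a Caughey-Thomas-type doping- and temperature-dependent low-field mobility, a functional form commonly used in drift-diffusion TCAD. The function name and every parameter value here are illustrative assumptions, not the calibrated GaN values compared in the paper.

```python
# Sketch: doping- and temperature-dependent low-field electron mobility in the
# Caughey-Thomas form commonly used in drift-diffusion TCAD simulators.
# All parameter values below are illustrative placeholders, NOT the calibrated
# GaN parameters reviewed in the paper.

def electron_mobility(N_dop_cm3: float, T_K: float,
                      mu_min=55.0,     # cm^2/(V*s), illustrative
                      mu_max=1000.0,   # cm^2/(V*s), illustrative
                      N_ref=2e17,      # cm^-3, illustrative
                      alpha=1.0,       # doping exponent, illustrative
                      beta=2.0):       # temperature exponent, illustrative
    """mu(N, T) = mu_min + (mu_max*(T/300)^-beta - mu_min) / (1 + (N/N_ref)^alpha)."""
    t = T_K / 300.0
    return mu_min + (mu_max * t**(-beta) - mu_min) / (1.0 + (N_dop_cm3 / N_ref)**alpha)

if __name__ == "__main__":
    # Mobility falls with both doping and temperature, which is what makes the
    # wide-temperature-range fit mentioned in the abstract a meaningful test.
    for T in (300.0, 400.0, 500.0):
        print(f"T = {T} K: mu = {electron_mobility(1e16, T):.1f} cm^2/(V*s)")
```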
Abstract:
The physical appearance and behavior of a robot are important assets in terms of Human-Computer Interaction. Multimodality is also fundamental, as we humans usually expect to interact in a natural way with voice, gestures, etc. People approach complex interaction devices with stances similar to those used in their interaction with other people. In this paper we describe a robot head, currently under development, that aims to be a multimodal (vision, voice, gestures, ...) perceptual user interface.
Abstract:
COSTA, Umberto Souza da; MOREIRA, Anamaria Martins; MUSICANTE, Martin A.; SOUZA NETO, Plácido A. JCML: A specification language for the runtime verification of Java Card programs. Science of Computer Programming. [S.l.: s.n.], 2010.
Abstract:
COSTA, Umberto Souza da; MOREIRA, Anamaria Martins; MUSICANTE, Martin A. Specification and Runtime Verification of Java Card Programs. Electronic Notes in Theoretical Computer Science. [S.l.: s.n.], 2009.
Abstract:
Computer games are significant since they embody our youngsters’ engagement with contemporary culture, including both play and education. These games rely heavily on visuals, systems of signs and expression based on concepts and principles of Art and Architecture. We are researching a new genre of computer games, ‘Educational Immersive Environments’ (EIEs), to provide educational materials suitable for the school classroom. Close collaboration with subject teachers is necessary, but we feel a specific need to engage with the practicing artist, the art theoretician and historian. Our EIEs are loaded with multimedia (but especially visual) signs which act to direct the learner and provide the ‘game-play’ experience, forming semiotic systems. We suggest the hypothesis that computer games are a space of deconstruction and reconstruction (DeRe): when players enter the game, their physical world and their culture are torn apart; they move in a semiotic system which serves to reconstruct an alternate reality where disbelief is suspended. The semiotic system draws heavily on visuals which direct the players’ interactions and produce motivating gameplay. These can establish a reconstructed culture and an emerging game narrative. We have recently tested our hypothesis and have used it in developing design principles for computer game designers. Yet there are outstanding issues concerning the nature of the visuals used in computer games, and so questions for contemporary artists. Currently, the computer game industry employs artists in a ‘classical’ role in the production of concept sketches, storyboards and 3D content. But this is based on a specification from the client, which restricts the artist’s intellectual freedom. Our DeRe hypothesis places the artist at the generative centre, to inform the game designer how art may inform our DeRe semiotic spaces. This must of course begin with the artists’ understanding of DeRe in this time when our ‘identities are becoming increasingly fractured, networked, virtualized and distributed’. We hope to persuade artists to engage with the medium of computer game technology to explore these issues. In particular, we pose several questions to the artist: (i) How can particular ‘periods’ in art history be used to inform the design of computer games? (ii) How can specific artistic elements or devices be used to design ‘signs’ to guide the player through the game? (iii) How can visual material be integrated with other semiotic strata such as text and audio?
Abstract:
Understanding the dynamics of blood cells is a crucial element in discovering biological mechanisms, developing new efficient drugs, designing sophisticated microfluidic devices, and improving diagnostics. In this work, we focus on the dynamics of red blood cells in microvascular flow. Microvascular blood flow resistance has a strong impact on cardiovascular function and tissue perfusion. The flow resistance in microcirculation is governed by the flow behavior of blood through a complex network of vessels, where the distribution of red blood cells across vessel cross-sections may be significantly distorted at vessel bifurcations and junctions. We investigate the development of blood flow and its resistance, starting from a dispersed configuration of red blood cells, in simulations for different hematocrits, flow rates, vessel diameters, and aggregation interactions between red blood cells. Initially dispersed red blood cells migrate toward the vessel center, leading to the formation of a cell-free layer near the wall and to a decrease of the flow resistance. The development of the cell-free layer appears to be nearly universal when scaled with a characteristic shear rate of the flow, which allows an estimation of the length of a vessel required for full flow development, $l_c \approx 25D$, with vessel diameter $D$. Thus, the potential effect of red blood cell dispersion at vessel bifurcations and junctions on the flow resistance may be significant in vessels which are shorter than or comparable to the length $l_c$. The presence of aggregation interactions between red blood cells leads in general to a reduction of blood flow resistance. The development of the cell-free-layer thickness looks similar for the cases with and without aggregation interactions, although attractive interactions result in larger cell-free-layer plateau values. However, because the aggregation forces are short-ranged, at high enough shear rates ($\bar{\dot{\gamma}} \gtrsim 50~\text{s}^{-1}$) aggregation of red blood cells does not bring a significant change to the blood flow properties. We also develop a simple theoretical model which is able to describe the converged cell-free-layer thickness with respect to flow rate, assuming steady-state flow. The model is based on the balance between a lift force on red blood cells due to cell-wall hydrodynamic interactions and a shear-induced effective pressure due to cell-cell interactions in flow. We expect that these results can also be used to better understand the flow behavior of other suspensions of deformable particles such as vesicles, capsules, and cells. Finally, we investigate segregation phenomena in blood as a two-component suspension under Poiseuille flow, consisting of red blood cells and target cells. The spatial distribution of particles in blood flow is very important: for example, in the case of nanoparticle drug delivery, the particles need to come close to the microvessel walls in order to adhere and deliver the drug to a target position within the microvasculature. Here we consider that segregation can be described as a competition between shear-induced diffusion and the lift force that pushes every soft particle in a flow away from the wall. To investigate the segregation, we employ, on the one hand, 2D DPD simulations of red blood cells and target cells of different sizes, and on the other hand the steady-state Fokker-Planck equation. For the latter, we measure the force profile, particle distribution, and diffusion constant across the channel.
We compare simulation results with those from the Fokker-Planck equation and find a very good correspondence between the two approaches. Moreover, we investigate the diffusion behavior of target particles for different hematocrit values and shear rates. Our simulation results indicate that the diffusion constant increases with increasing hematocrit and depends linearly on the shear rate. The third part of the study describes the development of a simulation model for complex vascular geometries. This development is important for reproducing the vascular systems of small pieces of tissue, which might be obtained from MRI or microscope images. The simulation model of complex vascular systems can be divided into three parts: modeling the geometry, developing in- and outflow boundary conditions, and decomposing the simulation domain for efficient computation. We have found that for the in- and outflow boundary conditions it is better to use the SDPD fluid than the DPD one, because of the density fluctuations of the latter along the channel. During flow in a straight channel, it is difficult to control the density of the DPD fluid. However, the SDPD fluid does not have that shortcoming, even in more complex channels with many branches and in- and outflows, because the force acting on the particles is calculated depending also on the local density of the fluid.
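The competition described above between shear-induced diffusion and the wall-repelling lift force admits a compact steady-state statement. The following is a minimal sketch assuming a one-dimensional Fokker-Planck description across the channel, with cross-stream diffusivity $D(y)$, lift force $F_{\mathrm{lift}}(y)$, and an effective friction coefficient $\xi$; these symbols are introduced here for illustration and are not necessarily the thesis's notation.

```latex
% Zero-flux steady state of the cross-stream Fokker-Planck equation:
\partial_t P \;=\; \partial_y\!\left[\, D(y)\,\partial_y P \;-\; \frac{F_{\mathrm{lift}}(y)}{\xi}\,P \,\right] \;=\; 0
\quad\Longrightarrow\quad
P(y) \;\propto\; \exp\!\left( \int^{y} \frac{F_{\mathrm{lift}}(y')}{\xi\,D(y')}\,\mathrm{d}y' \right).
```

A stronger lift concentrates the particle distribution $P(y)$ away from the wall, while stronger shear-induced diffusion flattens it, which is precisely the segregation competition described above.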
Abstract:
In recent years, higher cadence, higher resolution observations have revealed the quiet-Sun photosphere to be complex and rapidly evolving. Since magnetic fields anchored in the photosphere extend up into the solar corona, it is expected that the small-scale coronal magnetic field exhibits similar complexity. For the first time, the quiet-Sun coronal magnetic field is continuously evolved through a series of non-potential, quasi-static equilibria, deduced from magnetograms observed by the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory, where the photospheric boundary condition which drives the coronal evolution exactly reproduces the observed magnetograms. The build-up, storage, and dissipation of magnetic energy within the simulations are studied. We find that the free magnetic energy built up and stored within the field is sufficient to explain small-scale, impulsive events such as nanoflares. On comparing with coronal images of the same region, the energy storage and dissipation visually reproduce many of the observed features. The results indicate that the complex small-scale magnetic evolution of a large number of magnetic features is a key element in explaining the nature of the solar corona.
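For reference, the free magnetic energy invoked above is conventionally measured relative to the potential (current-free) field matching the same boundary flux. A standard textbook form, stated here as a sketch rather than the paper's own equation, is:

```latex
E_{\mathrm{free}} \;=\; E - E_{\mathrm{pot}}
\;=\; \frac{1}{2\mu_0}\int_V \left( B^2 - B_{\mathrm{pot}}^2 \right)\mathrm{d}V \;\ge\; 0,
```

where $B_{\mathrm{pot}}$ is the potential field matching the observed magnetogram on the boundary; only this excess above the potential-field energy is available to power impulsive events such as nanoflares.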
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
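The progressive-sampling idea underlying NOW! can be illustrated generically: evaluate a query over samples of increasing size, so that early, approximate answers arrive long before the full result. The sketch below is a minimal illustration of that concept only; the function and its semantics are assumptions for exposition and do not represent the NOW! API or its deterministic progress semantics.

```python
# Sketch of generic progressive sampling: report an aggregate estimate over
# samples of increasing size. Illustrative only; NOT the NOW! framework's API.
import random

def progressive_mean(data, sample_sizes, seed=42):
    """Yield (sample_size, estimate) pairs for increasing sample sizes.

    A single shuffled order makes each sample a superset of the previous one,
    so the running sum computed on earlier samples is reused, not recomputed.
    """
    rng = random.Random(seed)          # fixed seed -> repeatable samples
    order = list(data)
    rng.shuffle(order)
    running_sum, consumed = 0.0, 0
    for n in sorted(sample_sizes):
        for x in order[consumed:n]:    # extend the previous sample to size n
            running_sum += x
        consumed = n
        yield n, running_sum / consumed

if __name__ == "__main__":
    population = [random.gauss(100.0, 15.0) for _ in range(100_000)]
    for n, est in progressive_mean(population, [1_000, 10_000, 100_000]):
        print(f"sample={n:>7}  estimated mean={est:.2f}")
```

Reusing one shuffled order gives nested progressive samples, which is the work-reuse property that, per the abstract, tedious manual workflows lack.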
Abstract:
Nowadays there is almost no crime committed without a trace of digital evidence, and since the advanced functionality of mobile devices today can be exploited to assist in crime, the need for mobile forensics is imperative. Many of the mobile applications available today, including internet browsers, will request the user’s permission to access their current location when in use. This geolocation data is subsequently stored and managed by that application's underlying database files. If recovered from a device during a forensic investigation, such GPS evidence and track points could hold major evidentiary value for a case. The aim of this paper is to examine and compare to what extent geolocation data is available from the iOS and Android operating systems. We focus particularly on geolocation data recovered from internet browsing applications, comparing the native Safari and Browser apps with Google Chrome, downloaded onto both platforms. All browsers were used over a period of several days at various locations to generate comparable test data for analysis. Results show considerable differences not only in the storage locations and formats, but also in the amount of geolocation data stored by different browsers and on different operating systems.
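Mobile browsers typically keep such records in SQLite database files, so an examination of the kind described above reduces to locating the right file in a device image and querying it. The sketch below shows the general shape of that step in Python; the database file name, table name, and column names are hypothetical placeholders, since actual schemas vary by browser, version, and operating system.

```python
# Sketch: reading geolocation records out of a browser SQLite database
# recovered from a device image. File, table, and column names below are
# HYPOTHETICAL placeholders; real schemas differ per browser/version/OS.
import sqlite3

def extract_geolocation(db_path: str):
    """Return (timestamp, latitude, longitude) rows from a hypothetical table."""
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(
            "SELECT timestamp, latitude, longitude "
            "FROM geolocation_records "   # hypothetical table name
            "ORDER BY timestamp"
        )
        return cur.fetchall()
    finally:
        con.close()

if __name__ == "__main__":
    # "browser_geo.db" is a hypothetical file extracted from a device image.
    for ts, lat, lon in extract_geolocation("browser_geo.db"):
        print(ts, lat, lon)
```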
Abstract:
The number of connected devices collecting and distributing real-world information through various systems is expected to soar in the coming years. As the number of such connected devices grows, it becomes increasingly difficult to store and share all these new sources of information. Several context representation schemes try to standardize this information, but none of them has been widely adopted. In previous work we addressed this challenge; however, our solution had some drawbacks: poor semantic extraction and scalability. In this paper we discuss ways to efficiently deal with the diversity of representation schemes and propose a novel d-dimension organization model. Our evaluation shows that the d-dimension model improves scalability and semantic extraction.
Abstract:
The value of integrating heat storage into a geothermal district heating system has been investigated. The behaviour of the system under a novel operational strategy has been simulated, focusing on the energetic, economic and environmental effects of the new strategy of incorporating heat storage within the system. A typical geothermal district heating system consists of several production wells, a system of pipelines for the transportation of the hot water to end-users, one or more re-injection wells, and peak-up devices (usually fossil-fuel boilers). Traditionally in these systems, the production wells change their production rate throughout the day according to heat demand, and if their maximum capacity is exceeded the peak-up devices are used to meet the balance of the heat demand. In this study, it is proposed to maintain a constant geothermal production and add heat storage into the network. Hot water will then be stored when heat demand is lower than the production, and the stored hot water will be released into the system to cover the peak demands (or part of these). It is not intended to totally phase out the peak-up devices, but to decrease their use, as these will often be installed anyway for back-up purposes. Both the integration of heat storage into such a system and the novel operational strategy are the main novelties of this thesis. A robust algorithm for the sizing of these systems has been developed. The main inputs are the geothermal production data, the heat demand data throughout one year or more, and the topology of the installation. The outputs are the sizing of the whole system, including the necessary number of production wells, the size of the heat storage, and the dimensions of the pipelines, amongst others. The results provide several useful insights into the initial design considerations for these systems, emphasizing particularly the importance of heat losses. Simulations are carried out for three different cases of sizing of the installation (small, medium and large) to examine the influence of system scale. In the second phase of the work, two algorithms are developed which study in detail the operation of the installation over a single day and over a whole year, respectively. The first algorithm can be a potentially powerful tool for the operators of the installation, who can know a priori how to operate the installation on any given day, given the heat demand. The second algorithm is used to obtain the amount of electricity used by the pumps as well as the amount of fuel used by the peak-up boilers over a whole year. These comprise the main operational costs of the installation and are among the main inputs of the third part of the study. In the third part of the study, an integrated energetic, economic and environmental analysis of the studied installation is carried out, together with a comparison with the traditional case. The results show that by implementing heat storage under the novel operational strategy, heat is generated more cheaply, as all the financial indices improve, more geothermal energy is utilised, and less fuel is used in the peak-up boilers, with subsequent environmental benefits, when compared to the traditional case. Furthermore, it is shown that the most attractive case of sizing is the large one, although the addition of the heat storage most greatly impacts the medium case of sizing. In other words, the geothermal component of the installation should be sized as large as possible.
This analysis indicates that the proposed solution is beneficial from energetic, economic, and environmental perspectives. Therefore, it can be stated that the aim of this study is achieved to its full potential. Furthermore, the new models for the sizing, operation and economic/energetic/environmental analyses of this kind of system can be used with few adaptations for real cases, making the practical applicability of this study evident. With this study as a starting point, further work could include the integration of these systems with end-user demands, further analysis of component parts of the installation (such as the heat exchangers), and the integration of a heat pump to maximise utilisation of geothermal energy.
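The core of the operational strategy described above (constant geothermal output, storage charged during demand troughs and discharged at peaks, boilers covering only the remainder) can be sketched in a few lines. The function below is a minimal illustration under assumed units and a single lumped store; it ignores heat losses and pumping electricity, both of which the thesis's algorithms account for, and all numbers are illustrative.

```python
# Minimal sketch of the dispatch rule: constant geothermal production, storage
# charged on surplus and discharged on deficit, peak-up boilers covering the
# rest. Heat losses are ignored for brevity; all values are illustrative.

def dispatch(demand_mw, geo_mw, store_cap_mwh, dt_h=1.0):
    """Return the peak-up boiler output (MW) for each period of demand."""
    soc = 0.0                          # storage state of charge, MWh
    boiler = []
    for d in demand_mw:
        if d <= geo_mw:                # surplus: charge the storage
            soc = min(store_cap_mwh, soc + (geo_mw - d) * dt_h)
            boiler.append(0.0)
        else:                          # deficit: discharge first, boiler last
            from_store = min(d - geo_mw, soc / dt_h)
            soc -= from_store * dt_h
            boiler.append(d - geo_mw - from_store)
    return boiler

if __name__ == "__main__":
    demand = [20, 18, 25, 40, 35, 22]  # MW per hour, illustrative day
    print(dispatch(demand, geo_mw=25.0, store_cap_mwh=30.0))
    # -> [0.0, 0.0, 0.0, 3.0, 10.0, 0.0]: the morning surplus covers most of
    # the evening peak, so the boiler fires far less than it would otherwise.
```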
Abstract:
Embedded software systems in vehicles are of rapidly increasing commercial importance for the automotive industry. Current systems employ a static run-time environment, due to the difficulty and cost involved in the development of dynamic systems in a high-integrity embedded control context. A dynamic system, in the sense of a dynamic system configuration, would greatly increase the flexibility of the offered functionality and enable customised software configuration for individual vehicles, adding customer value through plug-and-play capability, and increased quality due to its inherent ability to adjust to changes in hardware and software. We envisage an automotive system containing a variety of components, from a multitude of organizations, not necessarily known at development time. The system dynamically adapts its configuration to suit the run-time system constraints. This paper presents our vision for future automotive control systems, to be considered in an EU research project referred to as DySCAS (Dynamically Self-Configuring Automotive Systems). We propose a self-configuring vehicular control system architecture, with capabilities that include automatic discovery and inclusion of new devices, self-optimisation to make best use of the processing, storage and communication resources available, self-diagnostics and ultimately self-healing. Such an architecture has benefits extending to reduced development and maintenance costs, improved passenger safety and comfort, and flexible owner customisation. Specifically, this paper addresses the following issues: the state of the art of embedded software systems in vehicles, emphasising the current limitations arising from fixed run-time configurations; and the benefits and challenges of dynamic configuration, giving rise to opportunities for self-healing, self-optimisation, and the automatic inclusion of users’ Consumer Electronic (CE) devices. Our proposal for a dynamically reconfigurable automotive software system platform is outlined, and a typical use-case is presented to exemplify the benefits of the envisioned dynamic capabilities.
Abstract:
In modern power electronics equipment, it is desirable to design a low-profile, high-power-density, fast-dynamic-response converter. Increasing the switching frequency reduces the size of passive components such as transformers, inductors, and capacitors, which results in a compact size and a smaller energy-storage requirement. In addition, a fast dynamic response can be achieved by operating at high frequency. However, achieving high-frequency operation while keeping the efficiency high requires new advanced devices, higher-performance magnetic components, and new circuit topologies. These are required to absorb and utilize the parasitic components and also to mitigate the frequency-dependent losses, including switching loss, gating loss, and magnetic loss. The required performance improvements can be achieved through the use of Radio Frequency (RF) design techniques. To reduce switching losses, resonant converter topologies such as resonant RF amplifiers (inverters) combined with a rectifier are an effective solution for maintaining high efficiency at high switching frequencies, using techniques such as device parasitic absorption, Zero Voltage Switching (ZVS), Zero Current Switching (ZCS), and resonant gating. Gallium Nitride (GaN) device technologies are being broadly used in RF amplifiers due to their lower on-resistance and device capacitances compared with silicon (Si) devices. Therefore, this kind of semiconductor is well suited for high-frequency power converters. The major problems involved with high-frequency magnetics are skin and proximity effects, increased core and copper losses, unbalanced magnetic flux distribution generating localized hot spots, and a reduced coupling coefficient. In order to eliminate the magnetic core losses, which play a crucial role at higher operating frequencies, a coreless PCB transformer can be used. Compared to a conventional wire-wound transformer, a planar PCB transformer, in which the windings are laid out on the Printed Circuit Board (PCB), has a low-profile structure, excellent thermal characteristics, and ease of manufacturing. Therefore, the work in this thesis demonstrates the design and analysis of an isolated low-profile class DE resonant converter operating at a 10 MHz switching frequency with a nominal output of 150 W. The power stage consists of a class DE inverter using GaN devices along with a sinusoidal gate drive circuit on the primary side, and a class DE rectifier on the secondary side. To meet the stringent height requirement of the converter, isolation is provided by a 10-layer coreless PCB transformer with a 1:20 turns ratio, designed and optimized using 3D Finite Element Method (FEM) tools and radio frequency (RF) circuit design software. Simulation and experimental results are presented for the 10-layer coreless PCB transformer operating at 10 MHz.
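A resonant stage of this kind is tuned so that the tank's resonant frequency sits near the switching frequency. As a back-of-envelope sketch (the tank capacitance chosen below is an illustrative assumption, not the thesis's design value), the series-tank inductance follows from $f_r = 1/(2\pi\sqrt{LC})$:

```python
# Back-of-envelope sizing of a series resonant tank tuned to the 10 MHz
# switching frequency used in the thesis. C_TANK is an illustrative choice,
# not the thesis's design value.
import math

F_SW = 10e6      # switching frequency, Hz (from the thesis)
C_TANK = 1e-9    # tank capacitance, F -- illustrative assumption

# Solve f_r = 1 / (2*pi*sqrt(L*C)) for L with f_r = F_SW:
L_tank = 1.0 / ((2.0 * math.pi * F_SW) ** 2 * C_TANK)
Z0 = math.sqrt(L_tank / C_TANK)   # characteristic impedance of the tank

print(f"L = {L_tank * 1e9:.1f} nH, Z0 = {Z0:.1f} ohm")
# -> L = 253.3 nH, Z0 = 15.9 ohm
```

At 10 MHz the required inductance is only a few hundred nanohenries, which is why a coreless PCB transformer with its modest but core-loss-free inductance becomes a practical choice at this frequency.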