896 results for computation- and data-intensive applications


Relevance: 100.00%

Abstract:

With the growth of security and surveillance applications, there is an increasing need to process image data efficiently and effectively, either at source or in a large data network. Whilst the Field-Programmable Gate Array (FPGA) has been seen as a key technology for enabling this, the design process has been viewed as problematic in terms of the time and effort needed for implementation and verification. The work here proposes a different approach: using optimized FPGA-based soft-core processors, which allows the user to exploit task- and data-level parallelism to achieve the quality of dedicated FPGA implementations whilst reducing design time. The paper also reports preliminary progress on the design flow used to program the structure. An implementation of a Histogram of Gradients algorithm is also reported, showing that a performance of 328 fps can be achieved with this design approach, whilst avoiding the long design, verification and debugging steps associated with conventional FPGA implementations.
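As a point of reference for the algorithm named above, here is a minimal software sketch of a Histogram of Gradients descriptor: per-cell, magnitude-weighted orientation histograms. The cell size and bin count are illustrative assumptions rather than the paper's parameters, and a real FPGA implementation would parallelize the per-cell work across processing elements.

```python
import numpy as np

def hog_cells(image, cell=8, bins=9):
    """Per-cell orientation histograms for a grayscale float image."""
    gy, gx = np.gradient(image.astype(float))      # gradients along rows/cols
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    out = np.zeros((image.shape[0] // cell, image.shape[1] // cell, bins))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = (a * bins / 180.0).astype(int) % bins
            for b in range(bins):
                out[i, j, b] = m[idx == b].sum()   # magnitude-weighted vote
    return out
```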

Relevance: 100.00%

Abstract:

Educational systems worldwide are facing an enormous shift as a result of sociocultural, political, economic, and technological changes. The technologies and practices that have developed over the last decade have been heralded as opportunities to transform both online and traditional education systems. While proponents of these new ideas often postulate that they have the potential to address the educational problems facing both students and institutions, and that they could provide an opportunity to rethink the ways that education is organized and enacted, there is little evidence of emerging technologies and practices in use in online education. Because researchers and practitioners interested in these possibilities often reside in different disciplines and academic departments, sharing and disseminating their work across often rigid boundaries is a formidable task. Contributors to Emergence and Innovation in Digital Learning include individuals who are shaping the future of online learning with their innovative applications and investigations of the impact of issues such as openness, analytics, MOOCs, and social media. Building on work first published in Emerging Technologies in Distance Education, the contributors to this collection harness the dispersed knowledge in online education to provide a one-stop locale for work on emergent approaches in the field. Their conclusions will influence the adoption and success of these approaches to education and will enable researchers and practitioners to conceptualize, critique, and enhance their understanding of the foundations and applications of new technologies.

Relevance: 100.00%

Abstract:

This project is aimed at comparing two existing Internet-of-Things (IoT) platforms, SensibleThings (ST) and Global Sensor Networks (GSN). It can serve as a continuation of earlier investigations of the platforms, and the comparison is intended to inform the development of future platforms. The detailed comparison mainly covers platform features, communication and data-presentation-frequency performance under stress, and node scalability on a single resource-limited device. The study was conducted by developing applications on each platform and measuring their performance under the same conditions in a household network environment. All of these aspects have produced results and been evaluated. Qualitatively, GSN performs better in terms of rapid node development and deployment, data management, node subscription, and its connection-retry mechanism, whereas ST is superior in network packet encryption, platform reliability, session-initialization latency, and degree of development freedom. Quantitatively, nodes on GSN withstand data-push pressure better, while ST nodes work with lower session latency. In terms of data presentation frequency, an ST node can reach a higher update frequency than a GSN node. Regarding node scalability on a single limited device, ST nodes have the advantage of lower average latency than GSN nodes when fewer than 15 nodes run on the device; however, owing to GSN's sharing mechanism, its nodes scale better on one limited device when the nodes perform similar jobs.
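For context, latency measurements of this kind typically reduce to a small harness like the following sketch, which times request/response rounds against a node endpoint. This is not the project's actual test code; the host, port, and payload are hypothetical placeholders for a platform node.

```python
import socket
import statistics
import time

def measure_latency(host, port, payload=b"ping", rounds=100):
    """Time `rounds` connect/send/receive cycles and summarize them."""
    samples = []
    for _ in range(rounds):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(payload)            # push a small request to the node
            s.recv(4096)                  # wait for the node's reply
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)
```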

Relevance: 100.00%

Abstract:

Research in the field of ceramic pigments is oriented towards enlarging the available chromatic palette and towards replacing more expensive and less stable organic pigments. Novel non-toxic inorganic pigments are required to comply with environmental legislation that removes elements such as lead, chromium and cobalt, which enter into the composition of pigments widely used in paints and plastics. Yellow is a particularly important color in the pigment industry, and consumption of yellow exceeds that of any other colored pigment. In addition, highly infrared-reflective pigments are now in great demand for use in coatings, cement pavements, automotive finishes and camouflage applications. They not only impart color to an object but also reflect invisible heat, minimizing heat build-up when the object is exposed to solar radiation. With this in view, the present work aims at developing new functional yellow pigments for these applications. A series of IR-reflecting yellow pigments has been synthesized and analyzed for crystalline structure, morphology, composition and optical characteristics, as well as for coloring and energy-saving applications.
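A common figure of merit for such "cool" pigments is the NIR solar reflectance: the measured reflectance spectrum weighted by the solar irradiance spectrum (e.g. the ASTM G173 reference) over roughly 700-2500 nm. A minimal sketch of that calculation, assuming the spectra are available as NumPy arrays; the band limits are the conventional ones, not necessarily those used in this work.

```python
import numpy as np

def _integrate(y, x):
    """Trapezoid rule (avoids NumPy version differences around np.trapz)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def nir_solar_reflectance(wavelength_nm, reflectance, solar_irradiance):
    """Solar-irradiance-weighted mean reflectance over ~700-2500 nm."""
    nir = (wavelength_nm >= 700) & (wavelength_nm <= 2500)
    w = wavelength_nm[nir]
    return (_integrate(reflectance[nir] * solar_irradiance[nir], w)
            / _integrate(solar_irradiance[nir], w))
```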

Relevance: 100.00%

Abstract:

Heading into the 2020s, physics and astronomy are undergoing experimental revolutions that will reshape our picture of the fabric of the Universe. The Large Hadron Collider (LHC), the largest particle-physics project in the world, produces 30 petabytes of data annually that need to be sifted through, analysed, and modelled. In astrophysics, the Large Synoptic Survey Telescope (LSST) will take a high-resolution image of the full sky every 3 days, leading to data rates of 30 terabytes per night over ten years. These experiments endeavour to answer the question of why 96% of the content of the universe currently eludes our physical understanding. The LHC and LSST share the 5-dimensional nature of their data, with position, energy and time being the fundamental axes. This talk presents an overview of the experiments and the data they gather, and outlines the challenges in extracting information. The strategies employed are very similar to industrial data-science problems (e.g., data filtering, machine learning, statistical interpretation) and provide a seed for the exchange of knowledge between academia and industry.

Speaker Biography: Professor Mark Sullivan. Mark Sullivan is a Professor of Astrophysics in the Department of Physics and Astronomy. Mark completed his PhD at Cambridge and, following postdoctoral study in Durham, Toronto and Oxford, now leads a research group at Southampton studying dark energy using exploding stars called "type Ia supernovae". Mark has many years' experience of research that involves repeatedly imaging the night sky to track the arrival of transient objects, involving significant challenges in data handling, processing, classification and analysis.
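The data rates quoted in the talk abstract are easy to put on a common scale; a quick back-of-the-envelope check, using only the figures stated above:

```python
# LSST: ~30 TB per night for ten years; LHC: ~30 PB per year.
lsst_total_pb = 30e12 * 365 * 10 / 1e15   # ~110 PB over the full survey
lhc_total_pb = 30e15 * 10 / 1e15          # 300 PB over the same decade
print(lsst_total_pb, lhc_total_pb)        # 109.5 300.0
```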

Relevance: 100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 100.00%

Abstract:

GaN, InP and GaAs nanowires were investigated for their piezoelectric response. Nanowires, and structures based on them, can find wide application in areas such as nanogenerators, nanodrives, solar cells and other prospective fields. Experimental measurements were carried out on a Bruker MultiMode 8 AFM, and the data were processed with Nanoscope software. The AFM techniques made it possible not only to visualize the surface topography but also to map the distribution of the piezoresponse and to calculate its properties. The calculated values are in the same range as those published by other authors.
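For orientation, the standard quantity extracted from piezoresponse force microscopy data of this kind is an effective piezoelectric coefficient: the measured displacement amplitude divided by the AC drive voltage. The numbers below are purely illustrative, not the study's measured values.

```python
# Illustrative PFM estimate; values are placeholders, not measured data.
amplitude_pm = 12.0           # tip deflection amplitude, picometres
v_ac = 2.0                    # AC drive voltage, volts
d_eff = amplitude_pm / v_ac   # effective piezo coefficient, pm/V
print(d_eff)                  # 6.0 pm/V; a few pm/V is typical for III-V/III-N
```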

Relevance: 100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 100.00%

Abstract:

Background: The impact of cancer upon children, teenagers and young people can be profound. Research has been undertaken to explore the impacts upon children, teenagers and young people with cancer, but little is known about how researchers can ‘best’ engage with this group to explore their experiences. This review paper provides an overview of the utility of the data collection methods employed when undertaking research with children, teenagers and young people. A systematic review of relevant databases was undertaken utilising the search terms ‘young people’, ‘young adult’, ‘adolescent’ and ‘data collection methods’. The full text of the papers deemed eligible from the title and abstract was accessed and, following discussion within the research team, thirty papers were included. Findings: Owing to the heterogeneity in the scope of the papers identified, the following data collection methods were included in the results section. Three of the papers provided an overview of data collection methods utilised with this population, and the remaining twenty-seven papers covered the following methods: digital technologies; art-based research; comparison of ‘paper and pencil’ research with web-based technologies; the use of games; the use of a specific communication tool; questionnaires and interviews; focus groups; and telephone interviews/questionnaires. The strengths and limitations of this range of data collection methods are discussed, drawing upon issues such as the appropriateness of particular methods for particular age groups, or the most appropriate method to employ when exploring a particularly sensitive topic area. Conclusions: A number of data collection methods are utilised to undertake research with children, teenagers and young adults. This review provides a summary of the currently available evidence and an overview of the strengths and limitations of the data collection methods employed.

Relevance: 100.00%

Abstract:

Internet users consume online targeted advertising based on information collected about them, and voluntarily share personal information in social networks. Sensor information and data from smartphones are collected and used by applications, sometimes in unclear ways. As happens today with smartphones, in the near future sensors will be shipped in all types of connected devices, enabling ubiquitous information gathering from the physical environment and enabling the vision of Ambient Intelligence. The value of the gathered data, if not obvious, can be harnessed through data mining techniques and put to use by enabling personalized and tailored services as well as business intelligence practices, fueling the digital economy. However, this ever-expanding information gathering and use undermines the privacy conceptions of the past. Natural social practices of managing privacy in daily relations are overridden by socially awkward communication tools; service providers struggle with security issues, resulting in harmful data leaks; governments use mass surveillance techniques; the incentives of the digital economy threaten consumer privacy; and the advancement of consumer-grade data-gathering technology enables new inter-personal abuses. A wide range of fields attempts to address technology-related privacy problems, but they vary immensely in terms of assumptions, scope and approach. Privacy in future use cases is typically handled vertically, instead of building upon previous work that can be re-contextualized, while current privacy problems are typically addressed per type in a more focused way. Because significant effort was required to make sense of the relations and structure of privacy-related work, this thesis attempts to transmit a structured view of it. It is multi-disciplinary - from cryptography to economics, including distributed systems and information theory - and addresses privacy issues of different natures. As existing work is framed and discussed, the contributions to the state of the art made in the scope of this thesis are presented. The contributions add to five distinct areas: 1) identity in distributed systems; 2) future context-aware services; 3) event-based context management; 4) low-latency information flow control; 5) high-dimensional dataset anonymity. Finally, having laid out such a landscape of privacy-preserving work, current and future privacy challenges are discussed, considering not only technical but also socio-economic perspectives.
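As one concrete example from area 5, a dataset-anonymity notion frequently used as a baseline is k-anonymity (a classical formulation, not necessarily the thesis's own): every combination of quasi-identifier values must occur at least k times. A minimal check, with hypothetical column names:

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """rows: list of dicts; quasi_identifiers: keys an attacker could link."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return all(count >= k for count in groups.values())

rows = [{"zip": "4000", "age": 30},
        {"zip": "4000", "age": 30},
        {"zip": "4001", "age": 41}]
print(is_k_anonymous(rows, ["zip", "age"], 2))   # False: one singleton group
```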

Relevance: 100.00%

Abstract:

New methods of nuclear fuel and cladding characterization must be developed and implemented to enhance the safety and reliability of nuclear power plants. One class of such advanced methods aims to characterize fuel performance by performing minimally intrusive, in-core, real-time measurements on nuclear fuel at the nanometer scale. Nuclear power plants depend on instrumentation and control systems for monitoring, control and protection. Traditionally, fuel characterization under irradiation is performed using a “cook and look” approach. These methods are very expensive and labor-intensive, since they require removal, inspection and return of irradiated samples for each measurement. Such fuel cladding inspection methods investigate oxide layer thickness, wear, dimensional changes, ovality, nuclear fuel growth and nuclear fuel defect identification. These methods are also not suitable for all commercial nuclear power applications, as they are not always available to the operator when needed. Additionally, such techniques often provide limited data and may exacerbate the phenomena being investigated. This thesis investigates a novel nanostructured sensor, based on a photonic crystal design, that is implemented in a nuclear reactor environment. The aim of this work is to produce an in-situ, radiation-tolerant sensor capable of measuring the deformation of a nuclear material during nuclear reactor operations. The sensor was fabricated on the surface of nuclear reactor materials (specifically, steel and zirconium based alloys). Charged-particle and mixed-field irradiations were performed on a newly developed “pelletron” beamline at Idaho State University's Research and Innovation in Science and Engineering (RISE) complex and at the University of Maryland's 250 kW Training Reactor (MUTR). The sensors were irradiated to six different fluence levels (ranging from 1 to 100 dpa), followed by intensive characterization using focused ion beam (FIB), transmission electron microscopy (TEM) and scanning electron microscopy (SEM) to investigate the physical deformation and microstructural changes between the fluence levels and to provide high-resolution information regarding the material performance. Computer modeling (SRIM/TRIM) was employed to simulate damage to the sensor as well as to provide significant information concerning the penetration depth of the ions into the material.
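For context on the "dpa" (displacements per atom) unit used for the fluence levels, SRIM/TRIM-style damage estimates ultimately rest on the standard NRT model for displacements produced by a primary knock-on atom. The sketch below is that textbook relation, with an illustrative displacement threshold (E_d ≈ 40 eV is the conventional value for iron-based alloys), not a detail taken from the thesis.

```python
def nrt_displacements(t_dam_ev, e_d_ev=40.0):
    """NRT displacements per primary knock-on atom.

    t_dam_ev: damage energy (eV); e_d_ev: displacement threshold (eV),
    ~40 eV being the conventional value for iron-based alloys.
    """
    if t_dam_ev < e_d_ev:
        return 0.0                          # below threshold: no displacement
    if t_dam_ev < 2.0 * e_d_ev / 0.8:
        return 1.0                          # single-displacement regime
    return 0.8 * t_dam_ev / (2.0 * e_d_ev)  # linear cascade regime

print(nrt_displacements(1000.0))            # 10.0 displacements
```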

Relevance: 100.00%

Abstract:

In the last thirty years, the emergence and progression of biologging technology has led to great advances in marine predator ecology. Large databases of location and dive observations from biologging devices have been compiled for an increasing number of diving predator species (such as pinnipeds, sea turtles, seabirds and cetaceans), enabling complex questions about animal activity budgets and habitat use to be addressed. Central to answering these questions is our ability to correctly identify and quantify the frequency of essential behaviours, such as foraging. Despite technological advances that have increased the quality and resolution of location and dive data, accurately interpreting behaviour from such data remains a challenge, and analytical methods are only beginning to unlock the full potential of existing datasets. This review evaluates both traditional and emerging methods and presents a starting platform of options for future studies of marine predator foraging ecology, particularly from location and two-dimensional (time-depth) dive data. We outline the different devices and data types available, discuss the limitations and advantages of commonly used analytical techniques, and highlight key areas for future research. We focus our review on pinnipeds - one of the most studied taxa of marine predators - but offer insights that will be applicable to other air-breathing marine predator tracking studies. We highlight that traditionally used methods for inferring foraging from location and dive data, such as first-passage time and dive shape analysis, have important caveats and limitations depending on the nature of the data and the research question. We suggest that more holistic statistical techniques, such as state-space models, which can synthesise multiple track, dive and environmental metrics whilst simultaneously accounting for measurement error, offer more robust alternatives. Finally, we identify a need for more research to elucidate the role of physical oceanography, device effects, study animal selection, and developmental stages in predator behaviour and data interpretation.
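To make the first-passage time method mentioned above concrete, here is a minimal forward-only sketch: for each location along a track, the time until the animal first leaves a circle of radius r (high values suggest area-restricted search). Full implementations usually consider passage both forward and backward along the track; this simplification, and the assumption of projected x/y coordinates with timestamps in seconds, are mine.

```python
import numpy as np

def first_passage_times(x, y, t, r):
    """Forward-only first-passage time at each track point.

    x, y: projected coordinates; t: timestamps (s); r: circle radius.
    """
    fpt = np.full(len(x), np.nan)
    for i in range(len(x)):
        d = np.hypot(x[i:] - x[i], y[i:] - y[i])
        outside = np.nonzero(d > r)[0]
        if outside.size:                       # first exit of the r-circle
            fpt[i] = t[i + outside[0]] - t[i]
    return fpt
```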

Relevance: 100.00%

Abstract:

Power system engineers face a double challenge: to operate electric power systems within narrow stability and security margins, and to maintain high reliability. There is an acute need to better understand the dynamic nature of power systems in order to be prepared for critical situations as they arise. Innovative measurement tools, such as phasor measurement units, can capture not only the slow variation of the voltages and currents but also the underlying oscillations in a power system. Such access to dynamic data provides strong motivation, and a useful tool, for exploring dynamic data-driven applications in power systems. To fulfill this goal, this dissertation focuses on three areas: developing accurate dynamic load models and updating variable parameters based on measurement data; applying advanced nonlinear filtering concepts and technologies to real-time identification of power system models; and addressing computational issues by implementing the balanced truncation method. By obtaining more realistic system models, together with timely updated parameters and consideration of stochastic influences, we can form an accurate portrait of the ongoing phenomena in an electric power system, and hence further improve state estimation, stability analysis and real-time operation.
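As an illustration of the balanced truncation step named above, here is a textbook square-root implementation for a stable linear time-invariant model x' = Ax + Bu, y = Cx. It is a sketch under the usual assumptions (stable A, positive-definite Gramians), not the dissertation's code.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

def balanced_truncation(A, B, C, k):
    """Reduce (A, B, C) to order k; returns reduced matrices and HSVs."""
    P = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    Lc = cholesky(P, lower=True)                   # square-root factors
    Lo = cholesky(Q, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                      # s: Hankel singular values
    S = np.diag(1.0 / np.sqrt(s[:k]))
    T = Lc @ Vt[:k].T @ S                          # balancing projections
    Ti = S @ U[:, :k].T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, s
```

The retained Hankel singular values bound the truncation error, which is what makes the method attractive for reducing large dynamic models before real-time use.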

Relevance: 100.00%

Abstract:

Background and Objectives: Both psychiatric acute units and psychiatric intensive care units (PICUs) focus on the acute treatment of behavioral disturbances such as violence and aggressive threats and acts. The aim of the present study is to describe the frequency of violent behavior, such as verbal or physical threats and physical attacks, among patients admitted to a PICU. In addition, the relationship between episodes of threats and/or attacks and the time of day, day of the week, and season was explored. Methods: All violent behavior was continuously assessed at the psychiatric emergency department. Data were collected from May 2010 to May 2012. Results: Patients with only one hospitalization were less violent than those who had had two hospitalizations. There was a statistically significant difference in violence between patients without formal secondary education and those with no formal education. Violent behavior showed two peaks during the day, the first occurring at 1 pm and the second at 8 pm. With regard to seasonality, summer had a higher incidence of violence; the most peaceful seasons were spring and autumn. Conclusions: Violent behavior varies with the time of day, the day of the week and the season in acute psychiatric intensive care. The daytime variation shows two peaks of violence, at 1 pm and 8 pm, with Sundays and Wednesdays being the quietest days regarding violence in both winter and summer. Patients' level of education and hospitalization status partially explain the variation.

Relevance: 100.00%

Abstract:

The evolution of cellular systems towards the third generation (3G), or IMT-2000, shows a tendency to adopt W-CDMA as the standard access method, as ETSI decisions have shown. However, there are open questions about the capacity improvements and the suitability of this access method. One of the aspects that worry developers and researchers planning the third generation is the growing use of the Internet and of increasingly bandwidth-hungry applications. This work studies the performance of a W-CDMA system simulated on a PC using coverage maps generated with DC-Cell, a GIS-based planning tool developed by the Technical University of Valencia, Spain. The maps are exported to MATLAB and used in the model. The simulated system consists of several microcells in a downtown area. We analyse the interference from users in the same cell and in adjacent cells, and its effect on the system, assuming perfect power control in each cell. The traffic generated by the simulator comprises voice and data. This model allows us to work with more accurate coverage and is a good approach for analysing the multiple access interference (MAI) problem in microcellular systems with irregular coverage. Finally, we compare the results obtained with the performance of a similar system using TDMA.
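For scale, the MAI question the paper examines is often first approached with the textbook single-cell uplink capacity estimate sketched below: processing gain over the required Eb/N0, derated by the voice activity factor and the other-cell interference ratio. The W-CDMA chip rate and AMR voice rate are standard values; the remaining parameters are common planning assumptions, not figures from this paper.

```python
# Textbook W-CDMA uplink capacity estimate under perfect power control.
W = 3.84e6                 # chip rate, chips/s (W-CDMA standard)
R = 12.2e3                 # voice bit rate, bit/s (AMR codec)
ebno = 10 ** (5.0 / 10)    # required Eb/N0 of 5 dB (assumed)
v = 0.5                    # voice activity factor (assumed)
f = 0.55                   # other-to-own-cell interference ratio (assumed)

n_users = 1 + (W / R) / (ebno * v * (1 + f))
print(round(n_users))      # ~129 simultaneous voice users per cell
```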