963 results for dynamic digital displays


Relevance:

30.00%

Publisher:

Abstract:

Persistent daily congestion has increased in recent years, particularly along major corridors during morning and evening peak periods. On certain segments, these roadways often operate at or near capacity. A conventional predefined control strategy cannot accommodate demand that changes over time, which motivates the dynamic lane management strategies examined in this thesis: hard shoulder running, reversible HOV lanes, dynamic tolling, and variable speed limits. A mesoscopic agent-based DTA model is used to simulate the strategies under different scenarios. The analyses show that all strategies mitigate congestion as measured by average speed and average density. Hard shoulder running and reversible HOV lanes yield the largest improvements, while the other two strategies produce more stable traffic flow. In terms of average speed and travel time, hard shoulder running is the most effective strategy for relieving traffic pressure on the congested I-270 corridor.

Relevance:

30.00%

Publisher:

Abstract:

Teleoperation remains an important aspect of robotic systems, especially when they are deployed in unstructured environments. While much research strives for completely autonomous robots, many robotic applications still require some level of human-in-the-loop control. In any situation where teleoperation is required, an effective User Interface (UI) remains a key component of the system design. Current advancements in Virtual Reality (VR) software and hardware, such as the Oculus Rift, HTC Vive and Google Cardboard, combined with the greater transparency into robotic systems afforded by middleware such as the Robot Operating System (ROS), provide an opportunity to rapidly improve traditional teleoperation interfaces. This paper uses a System of Systems (SoS) approach to present the concept of a Virtual Reality Dynamic User Interface (VRDUI) for the teleoperation of heterogeneous robots. Different geometric virtual workspaces are discussed, and a cylindrical workspace aligned with interactive displays is presented as a virtual control room. A presentation mode within the proposed VRDUI is also detailed, showing how point cloud information obtained from the Microsoft Kinect can be incorporated within the proposed virtual workspace. The point cloud data is processed into an OctoMap, which uses the octree data structure to create a voxelized representation of the 3D scanned environment. The resulting OctoMap is then displayed to an operator as a 3D point cloud using the Oculus Rift Head Mounted Display (HMD).
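At its core, the voxelized representation described above amounts to discretizing point coordinates into occupied cells; OctoMap's octree adds hierarchical, probabilistic occupancy storage on top of that quantization. As an illustrative sketch (not the paper's implementation), a flat voxel grid in Python shows the basic step; the `voxelize` function name and the 5 cm resolution are assumptions for the example:

```python
import numpy as np

def voxelize(points: np.ndarray, resolution: float = 0.05):
    """Quantize an N x 3 point cloud into a set of occupied voxel indices.

    A flat grid stands in for OctoMap's octree here; the octree adds
    hierarchical storage and probabilistic occupancy on top of exactly
    this kind of discretization.
    """
    idx = np.floor(points / resolution).astype(int)
    return {tuple(v) for v in idx}

# Two nearby points fall into the same 5 cm voxel; the third is separate.
cloud = np.array([[0.01, 0.02, 0.03],
                  [0.02, 0.03, 0.04],
                  [0.50, 0.50, 0.50]])
occupied = voxelize(cloud)
```

Rendering each occupied voxel as a cube (or point) at `index * resolution` gives the kind of 3D view shown to the operator in the HMD.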

Relevance:

30.00%

Publisher:

Abstract:

Estimating unmeasurable states is an important component of onboard diagnostics (OBD) and control strategy development for diesel exhaust aftertreatment systems. This research focuses on the development of an Extended Kalman Filter (EKF) based state estimator for two of the main components of a diesel engine aftertreatment system: the Diesel Oxidation Catalyst (DOC) and the Selective Catalytic Reduction (SCR) catalyst. One key area of interest is the performance of these estimators while the catalyzed particulate filter (CPF) is being actively regenerated. In this study, model reduction techniques were developed and used to derive reduced order models from the 1D models used to simulate the DOC and SCR. As a result of the order reduction, the number of states in the estimator is reduced from 12 to 1 per element for the DOC and from 12 to 2 per element for the SCR. The reduced order models were simulated on the experimental data and compared to both the high-fidelity 1D models and the experiment. The results show that eliminating the heat transfer and mass transfer coefficients has no significant effect on the performance of the reduced order models, as evidenced by an insignificant change in the kinetic parameters between the reduced order and 1D models when simulating the experimental data. An EKF based estimator was then developed to estimate the internal states of the DOC and SCR. Simulated on the experimental data, the estimator provides improved state estimates compared to the reduced order model alone. The results also showed that using the temperature measurement at the DOC outlet improved the estimates of the CO, NO, NO2, and HC concentrations from the DOC. The SCR estimator was used to evaluate the effect of NH3 and NOX sensors on state estimation quality. Three sensor combinations were evaluated: a NOX sensor only, an NH3 sensor only, and both NOX and NH3 sensors.
The NOX-only configuration performed worst, the NH3-only configuration was intermediate, and the combination of NOX and NH3 sensors performed best.
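The EKF machinery referred to above follows the standard predict/update cycle. The sketch below is a generic, minimal EKF step, not the thesis's DOC/SCR estimator; the first-order temperature-lag process model, time constant, and noise covariances are illustrative assumptions:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One Extended Kalman Filter predict/update cycle.

    f, h are the (possibly nonlinear) process and measurement models;
    F, H are their Jacobians evaluated at the current estimate.
    """
    # Predict through the process model.
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update with the measurement z.
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1-state example: temperature relaxing toward an inlet value u.
dt, tau = 1.0, 10.0
f = lambda x, u: x + dt / tau * (u - x)
F = np.array([[1 - dt / tau]])
h = lambda x: x                 # direct temperature measurement
H = np.array([[1.0]])
x, P = np.array([300.0]), np.array([[25.0]])
Q, R = np.array([[0.01]]), np.array([[4.0]])
x, P = ekf_step(x, P, np.array([400.0]), np.array([310.0]), f, F, h, H, Q, R)
```

The measurement shrinks the state covariance below its predicted value, which is the mechanism by which the DOC outlet temperature sensor improves the concentration estimates described above.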

Relevance:

30.00%

Publisher:

Abstract:

Traditional decision making research has often focused on one's ability to choose from a set of fixed options, ignoring the process by which decision makers generate courses of action (i.e., options) in situ (Klein, 1993). In complex and dynamic domains, this option generation process is particularly critical to understanding how successful decisions are made (Zsambok & Klein, 1997). When generating response options for oneself to pursue (i.e., during the intervention phase of decision making), previous research has supported quick and intuitive heuristics, such as the Take-The-First heuristic (TTF; Johnson & Raab, 2003). When generating predictive options for others in the environment (i.e., during the assessment phase of decision making), previous research has supported the situational-model-building process described by Long-Term Working Memory theory (LTWM; see Ward, Ericsson, & Williams, 2013). In the first three experiments, the claims of TTF and LTWM are tested during assessment- and intervention-phase tasks in soccer. To test what other environmental constraints may dictate the use of these cognitive mechanisms, the claims of both models are also tested in the presence and absence of time pressure. Beyond understanding the option generation process, it is important that researchers in complex and dynamic domains also develop tools that can be used by real-world professionals. For this reason, three further experiments were conducted to evaluate the effectiveness of a new online assessment of perceptual-cognitive skill in soccer. This test differentiated between skill groups, predicted performance on a previously established test, and predicted option generation behavior. The test also outperformed domain-general cognitive tests, but not a domain-specific knowledge test, when predicting skill group membership. Implications for theory and training, and future directions for the development of applied tools, are discussed.
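The Take-The-First heuristic can be caricatured in a few lines: act on the first option that comes to mind rather than enumerating and comparing all of them. The sketch below is a toy contrast between TTF and exhaustive deliberation, not a model from the cited studies; the option values and evaluation noise are invented for the example:

```python
import random

def take_the_first(options):
    """TTF: act on the first option generated."""
    return options[0]

def deliberate(options, evaluate):
    """The exhaustive alternative: generate every option, then pick
    whichever scores best under a (possibly noisy) evaluation."""
    return max(options, key=evaluate)

random.seed(0)
# TTF's core claim: options tend to come to mind in rough order of
# quality, so the first-generated option is usually near-best.
options = [(rank, 10.0 - rank) for rank in range(5)]  # (generation order, true value)
noisy = lambda opt: opt[1] + random.gauss(0, 3)       # noisy in-the-moment evaluation

ttf_choice = take_the_first(options)     # instant, always the first option
slow_choice = deliberate(options, noisy) # slower, and can be misled by noise
```

Under time pressure the deliberate path also pays a time cost that TTF avoids, which is one reason the experiments manipulate time pressure when testing the two mechanisms.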

Relevance:

30.00%

Publisher:

Abstract:

Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate one or more lanes adjacent to a freeway that provide congestion-free trips to eligible users, such as transit vehicles or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among the different approaches for predicting this demand, the four-step demand forecasting process is the most common, with managed lane demand usually estimated at the assignment step. The key to reliably estimating the demand is therefore an effective assignment modeling process. Managed lanes are particularly effective when the road is functioning at near capacity, so capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring, and operation. As a result, traditional modeling approaches, such as the static traffic assignment used in demand forecasting models, fail to correctly predict managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support the effective use of DTA to model managed lane operations. Static and dynamic traffic assignment consist of demand, network, and route choice model components that need to be calibrated. Because these components interact with each other, an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment was developed to replicate real-world traffic conditions.
With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in the different stages of modeling and calibrating managed lanes. Extensive and careful processing of demand, traffic, and toll data, together with proper definition of performance measures, results in a calibrated and stable model that closely replicates real-world congestion patterns and responds reasonably to perturbations in network and demand properties.
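The sensitivity of managed-lane performance to near-capacity demand can be seen in the shape of the standard volume-delay relation used in static assignment. The sketch below uses the classic BPR function, a textbook formula rather than anything taken from this study, with illustrative link parameters; it shows how sharply travel time grows as volume approaches capacity, which is why small demand errors produce large performance errors:

```python
def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4.0):
    """Classic BPR volume-delay function: t = t0 * (1 + alpha * (v/c)**beta).

    t0 is the free-flow travel time; alpha and beta are the standard
    default coefficients.
    """
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

# A 10-minute link at three demand levels:
t_half = bpr_travel_time(10.0, 1000, 2000)   # v/c = 0.5: barely above free flow
t_full = bpr_travel_time(10.0, 2000, 2000)   # v/c = 1.0: noticeable delay
t_over = bpr_travel_time(10.0, 2400, 2000)   # v/c = 1.2: delay growing steeply
```

Because this curve is static and has no notion of queue buildup or spillback over time, it understates congestion dynamics near capacity; DTA models those dynamics explicitly, which is the study's central argument for using it on managed lanes.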

Relevance:

30.00%

Publisher:

Abstract:

Top predators can have large effects on community and population dynamics, but we still know relatively little about their roles in ecosystems and about which biotic and abiotic factors affect their behavioral patterns. Understanding the roles played by top predators is a pressing issue because many top predator populations around the world are declining rapidly, yet we do not fully understand what the consequences of their potential extirpation could be for ecosystem structure and function. In addition, individual behavioral specialization is commonplace across many taxa, but studies of its prevalence, causes, and consequences in top predator populations are lacking. In this dissertation I investigated the movement and feeding patterns, and the drivers and implications of individual specialization, in an American alligator (Alligator mississippiensis) population inhabiting a dynamic subtropical estuary. I found that alligator movement and feeding behaviors in this population were largely regulated by a combination of biotic and abiotic factors that varied seasonally. I also found that the population consisted of individuals displaying an extremely wide range of movement and feeding behaviors, indicating that individual specialization is potentially an important determinant of the varied roles of alligators in ecosystems. Ultimately, I found that it is likely incorrect to assume that top predator populations consist of individuals that all behave in similar ways in terms of their feeding, movements, and potential roles in ecosystems. As climate change and ecosystem restoration and conservation activities continue to affect top predator populations worldwide, individuals will likely respond in different and possibly unexpected ways.

Relevance:

30.00%

Publisher:

Abstract:

With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may have difficulty perceiving the icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, drawing on modeling tools such as Zernike polynomials, wavefront aberration, the Point Spread Function, and the Modulation Transfer Function. The ocular aberration of the computer user was first measured by a wavefront aberrometer, providing a reference for the precompensation model. The dynamic precompensation was then generated from the aberration rescaled to the real-time pupil diameter, which was monitored continuously. The potential visual benefit of the dynamic precompensation method was explored through software simulation using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use.
The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing a significant improvement in recognition accuracy. The merit and necessity of dynamic precompensation were also substantiated by comparison with static precompensation. The visual benefit of the dynamic precompensation was further confirmed by the subjective assessments collected from the participants.
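The core of image precompensation is inverse-filtering the displayed image with the eye's point spread function so that the optical blurring approximately cancels on the retina. The sketch below is a generic Wiener-style version of that idea, not the dissertation's method; the Gaussian PSF stands in for a measured ocular aberration, and the regularization constant `k` is an assumption:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered, normalized Gaussian PSF on the full image grid
    (a stand-in for a measured ocular point spread function)."""
    ys, xs = np.indices(shape)
    g = np.exp(-((ys - shape[0] // 2) ** 2 + (xs - shape[1] // 2) ** 2)
               / (2 * sigma ** 2))
    return g / g.sum()

def otf(psf):
    """Optical transfer function: FFT of the PSF, peak moved to the origin."""
    return np.fft.fft2(np.fft.ifftshift(psf))

def precompensate(image, psf, k=1e-3):
    """Wiener-style inverse filter: pre-distort the displayed image so
    that blurring by the PSF approximately cancels on the retina.
    k regularizes frequencies the PSF nearly destroys."""
    H = otf(psf)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

def blur(image, psf):
    """Simulate the eye's optics: convolve the image with the PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * otf(psf)))

psf = gaussian_psf((32, 32), sigma=1.5)
img = np.random.default_rng(0).random((32, 32))
err_plain = np.linalg.norm(blur(img, psf) - img)                 # uncorrected
err_pre = np.linalg.norm(blur(precompensate(img, psf), psf) - img)
```

The "dynamic" aspect described above corresponds to recomputing the PSF (and hence the filter) as the measured pupil diameter changes, rather than using one fixed PSF.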

Relevance:

30.00%

Publisher:

Abstract:

As research into the dynamic characteristics of job performance across time has continued to accumulate, associated implications for performance appraisal have become evident. At present, several studies have demonstrated that systematic trends in job performance across time influence how performance is ultimately judged. However, little research has considered the processes by which the performance trend-performance rating relationship occurs. In the present study, I addressed this gap. Specifically, drawing on attribution theory, I proposed and tested a model whereby the performance trend-performance rating relationship occurs through attributions to ability and effort. The results of this study indicated that attributions to ability, but not effort, mediate the relationship between performance trend and performance ratings and that this relationship depends on attribution-related cues. Implications for performance appraisal research and theory are discussed.

Relevance:

30.00%

Publisher:

Abstract:

Sampling and preconcentration techniques play a critical role in headspace analysis in analytical chemistry. My dissertation presents a novel sampling design, capillary microextraction of volatiles (CMV), that improves the preconcentration of volatiles and semivolatiles in a headspace, offering high throughput, near-quantitative analysis, high recovery, and unambiguous identification of compounds when coupled to mass spectrometry. The CMV devices use sol-gel polydimethylsiloxane (PDMS) coated microglass fibers, stacked into open-ended capillary tubes, as the sampling/preconcentration sorbent. The design allows for dynamic headspace sampling by connecting the device to a hand-held vacuum pump. The inexpensive device can be fitted into a thermal desorption probe for thermal desorption of the extracted volatile compounds into a gas chromatograph-mass spectrometer (GC-MS). The performance of the CMV devices was compared with two other existing preconcentration techniques: solid phase microextraction (SPME) and planar solid phase microextraction (PSPME). Compared to SPME fibers, the CMV devices have 5000 times the surface area and 80 times the phase volume. One minute of dynamic CMV air sampling achieved performance similar to a 30 min static extraction using a SPME fiber. The PSPME devices have been fashioned to interface easily with ion mobility spectrometers (IMS) for explosives or drug detection. The CMV devices are shown to offer dynamic sampling and can now be coupled to commercial off-the-shelf (COTS) GC-MS instruments. Several compound classes representing explosives have been analyzed with minimal breakthrough, even after a 60 min sampling time. The extracted volatile compounds were retained in the CMV devices when these were preserved in aluminum foil after sampling.
Finally, the CMV sampling devices were used for several different headspace profiling applications, which involved sampling a shipping facility, six illicit drugs, seven military explosives, and eighteen different bacterial strains. Successful detection of the signature volatile compounds of the target analytes at nanogram levels in these applications suggests that the CMV devices can provide high-throughput qualitative and quantitative analysis with high recovery and unambiguous identification of analytes.

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a new dual stack approach for reducing both leakage and dynamic power. The development of digital integrated circuits is challenged by increasing power consumption: the combination of higher clock speeds, greater functional integration, and smaller process geometries has contributed to significant growth in power density. Scaling improves transistor density and functionality on a chip, and it increases speed and frequency of operation and hence performance. As supply voltages scale downward with the geometries, threshold voltages must also decrease to realize the performance advantages of the new technology, but leakage current then increases exponentially. Leakage power has therefore become an increasingly important issue in processor hardware and software design, and it grows as technology is scaled down. The proposed approach can be applied in digital VLSI clocking systems, buffers, registers, microprocessors, and similar circuits. The best-known existing technique is the "sleep" method, which reduces leakage power; the proposed dual stack approach reduces leakage further. It exploits two extra pull-up and two extra pull-down transistors that are placed, in sleep mode, either in the OFF state or in the ON state. Since the dual stack portion can be made common to all logic circuitry, fewer transistors are needed to implement a given logic circuit. The dual stack approach shows the lowest speed-power product among all the methods compared, offering designers a new option when ultra-low leakage power consumption with a much lower speed-power product is required.
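The power trends the paper builds on follow from the standard CMOS relations: dynamic (switching) power scales as P = αCV²f, which is why voltage scaling is so attractive, while the accompanying threshold-voltage reduction drives leakage up exponentially. A quick numeric check of the dynamic-power relation, with all component values chosen purely for illustration:

```python
def dynamic_power(activity, capacitance, voltage, frequency):
    """Switching power of CMOS logic: P = alpha * C * V**2 * f.

    activity (alpha) is the switching activity factor, capacitance the
    switched capacitance in farads, voltage the supply in volts, and
    frequency the clock in hertz.
    """
    return activity * capacitance * voltage ** 2 * frequency

# Halving the supply voltage at the same clock quarters dynamic power:
p_full = dynamic_power(0.1, 1e-9, 1.0, 1e9)   # 1.0 V supply
p_half = dynamic_power(0.1, 1e-9, 0.5, 1e9)   # 0.5 V supply
```

The quadratic V dependence explains the pressure to lower supply voltage; the leakage side of the trade-off, which the dual stack approach targets, has no comparably simple closed form and is characterized by simulation in the paper.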

Relevance:

30.00%

Publisher:

Abstract:

Imaging Identity presents potent reflections on the human condition through the prism of portraiture. Taking digital imaging technologies and the dynamic and precarious dimensions of contemporary identity as critical reference points, these essays consider why portraits continue to have such galvanising appeal and perform fundamental work across so many social settings. This multidisciplinary enquiry brings together artists, art historians, art theorists and anthropologists working with a variety of media. Authors look beyond conventional ideas of the portrait to the wider cultural contexts, governmental practices and intimate experiences that shape relationships between persons and pictures. Their shared purpose centres on a commitment to understanding the power of images to draw people into their worlds. Imaging Identity tracks a fundamental symbiosis — to grapple with the workings of images is to understand something vital of what it is to be human.

Relevance:

30.00%

Publisher:

Abstract:

This thesis has introduced an infrastructure for sharing dynamic medical data securely between mixed health care providers, which could benefit the health care system as a whole. The study resulted in prototypes for universal data sharing across varied patient information systems.

Relevance:

30.00%

Publisher:

Abstract:

Digital games offer enormous potential for learning and engagement with mathematical ideas and processes. This volume offers multidisciplinary perspectives—from educators, cognitive scientists, psychologists and sociologists—on how digital games influence the social activities and mathematical ideas of learners/gamers. Contributing authors identify opportunities for broadening current understandings of how mathematical ideas are fostered (and embedded) within digital game environments. In particular, the volume advocates new and different ways of thinking about mathematics in our digital age, proposing that these mathematical ideas and numeracy practices are distinct from new literacies or multiliteracies. The authors acknowledge that the promise of digital games has not always been realised, and there is considerable emerging evidence to suggest that traditional discipline boundaries restrict opportunities for mathematical learning. Throughout the book, what constitutes mathematics learning and pedagogy is contested. Multidisciplinary viewpoints are used to describe and understand the potential of digital games for learning mathematics and to identify current tensions within the field. Mathematics learning is defined as being about problem solving; engagement with mathematical ideas and processes; and social engagement. The artefact, which is the game, shapes the ways in which gamers engage with the social activity of gaming. In parallel, the book (as a textual artefact) is supported by Springer's online platform, allowing video and digital communication (including links to relevant websites) to be used as supplementary material and to establish a dynamic communication space.

Relevance:

30.00%

Publisher:

Abstract:

Metadata that is associated with either an information system or an information object for purposes of description, administration, legal requirements, technical functionality, use and usage, and preservation plays a critical role in ensuring the creation, management, preservation and use and re-use of trustworthy materials, including records. Recordkeeping metadata, of which one key type is archival description, plays a particularly important role in documenting the reliability and authenticity of records and recordkeeping systems, as well as the various contexts (legal-administrative, provenancial, procedural, documentary, and technical) within which records are created and kept as they move across space and time. In the digital environment, metadata is also the means by which it is possible to identify how record components – those constituent aspects of a digital record that may be managed, stored and used separately by the creator or the preserver – can be reassembled to generate an authentic copy of a record, or reformulated per a user's request as a customized output package. Issues relating to the creation, capture, management and preservation of adequate metadata are, therefore, integral to any research study addressing the reliability and authenticity of digital entities, regardless of the community, sector or institution within which they are being created. The InterPARES 2 Description Cross-Domain Group (DCD) examined the conceptualization, definitions, roles, and current functionality of metadata and archival description in terms of the requirements generated by InterPARES 1.
Because of the need to communicate the work of InterPARES in a meaningful way not only across other disciplines but also across different archival traditions; to interface with, evaluate and inform existing standards, practices and other research projects; and to ensure interoperability across the three focus areas of InterPARES 2, the Description Cross-Domain Group also addressed its research goals with reference to wider thinking about, and developments in, recordkeeping and metadata. InterPARES 2 addressed not only records, however, but a range of digital information objects (referred to as "entities" by InterPARES 2, but not to be confused with the term "entities" as used in metadata and database applications) that are the products and by-products of government, scientific and artistic activities carried out using dynamic, interactive or experiential digital systems. The nature of these entities was determined through a diplomatic analysis undertaken as part of extensive case studies of digital systems conducted by the InterPARES 2 Focus Groups. This diplomatic analysis established whether the entities identified during the case studies were records, non-records that nevertheless raised important concerns relating to reliability and authenticity, or "potential records." To be determined to be records, the entities had to meet the criteria outlined by archival theory – they had to have a fixed documentary format and stable content. It was not sufficient that they be considered to be, or treated as, records by their creator. "Potential records" is a new construct indicating that a digital system has the potential to create records upon demand, but does not actually fix and set aside records in the normal course of business.
The work of the Description Cross-Domain Group therefore addresses the metadata needs of all three categories of entities. Finally, since "metadata" as a term is used so ubiquitously today, and in so many different ways by different communities, that it is in peril of losing any specificity, part of the work of the DCD sought to name and type categories of metadata. The DCD also addressed incentives for creators to generate appropriate metadata, as well as issues associated with the retention, maintenance and eventual disposition of the metadata that aggregates around digital entities over time.