982 results for Predictive technology
Abstract:
Purpose: The purpose of this paper is to examine whether improvements in technology that enhance community understanding of the frequency and severity of natural hazards also increase the risk of potential liability in negligence for planning authorities. In Australia, the National Strategy imposes a resilience-based approach to disaster management and stresses that responsible land use planning can reduce or prevent the impact of natural hazards upon communities. Design/methodology/approach: This paper analyses how the principles of negligence allocate responsibility for loss suffered by a landowner in a hazard-prone area between the landowner and local government. Findings: The analysis concludes that even where a causal link can be established between the loss suffered by a landowner and a local authority's approval to build in a hazard-prone area, only in the rarest of circumstances could a negligence action be proven. Research limitations/implications: The focus of this paper is on planning policies and land development, not on the negligent provision of advice or information by the local authority. Practical implications: This paper identifies the issues a landowner may face when seeking compensation from a local authority for loss suffered due to the occurrence of a natural hazard known or predicted to be possible in the area. Originality/value: The paper establishes that, as risk managers, local authorities must rely on scientific modelling and predictive technology when determining planning processes in order to fulfil their responsibilities under the National Strategy and to limit any possible liability in negligence.
Abstract:
The usual way of modeling variability through threshold voltage shift and drain current amplification is becoming inaccurate as new sources of variability appear in sub-22nm devices. In this work we apply the four-injector approach to variability modeling in the simulation of SRAMs with predictive technology models from the 20nm down to the 7nm node. We show that the SRAMs, designed following the ITRS roadmap, present stability metrics at least 20% higher than those obtained with a classical variability modeling approach. Speed estimates are also pessimistic, whereas leakage is underestimated if sub-threshold slope and DIBL mismatch and their correlations with threshold voltage are not considered.
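A minimal sketch (not from the paper) of the contrast between a classical two-injector variability model and a four-injector one in a Monte Carlo loop; the parameter names, sigma values, correlations and the toy leakage surrogate are illustrative assumptions, not the paper's models:

import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # Monte Carlo samples

# Classical two-injector model: threshold-voltage shift and drain-current gain only.
dvth_c = rng.normal(0.0, 0.030, N)   # Vth shift (V), illustrative sigma
gain_c = rng.normal(1.0, 0.05, N)    # drain-current amplification factor

# Four-injector model (illustrative): Vth shift, current gain, subthreshold-slope
# and DIBL mismatch, with assumed correlations between Vth and slope/DIBL.
cov = np.array([
    [0.030**2,        0.5*0.030*5e-3, 0.4*0.030*10e-3],
    [0.5*0.030*5e-3,  (5e-3)**2,      0.0],
    [0.4*0.030*10e-3, 0.0,            (10e-3)**2],
])
dvth, dss, ddibl = rng.multivariate_normal([0.0, 0.0, 0.0], cov, N).T
gain = rng.normal(1.0, 0.05, N)

def leakage(dvth, ss_mv=70.0, dss=0.0, dibl=0.0, vds=0.8):
    """Toy subthreshold leakage surrogate: I ~ 10**(-Vth_eff / SS)."""
    ss = ss_mv / 1000.0 + dss            # subthreshold slope (V/decade)
    vth_eff = 0.35 + dvth - dibl * vds   # DIBL lowers the effective Vth
    return 10.0 ** (-vth_eff / ss)       # normalized leakage (arbitrary units)

i_classical = gain_c * leakage(dvth_c)
i_four      = gain   * leakage(dvth, dss=dss, dibl=ddibl)

print(f"mean leakage, classical injectors: {i_classical.mean():.3e}")
print(f"mean leakage, four injectors:      {i_four.mean():.3e}")

With the slope and DIBL injectors switched off, the second estimate collapses to the first, which is the sense in which the classical approach can understate leakage spread.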
Abstract:
The past decade has seen an increase in the occurrence of natural hazards, and the experience in Australia has led to a reconsideration of government planning for natural hazards and to the adoption of a whole-of-nation, resilience-based approach to disaster management. A key component of creating community resilience is the integration of disaster management with government and community strategic planning in relation to the social, built, economic and natural environments. Joint responsibility of government and the community for ‘land use planning systems and building control arrangements [which] reduce, as far as is practicable, community exposure to unreasonable risks from known hazards’ is a critical element of a resilient community. As the responsibility for the implementation of land use planning policies in Australia generally rests with local governments, this paper examines whether, in light of improved predictive technology, the failure of a local government to adequately foresee and make provision for a known hazard will give rise to liability for damage or loss of property caused by that hazard.
Abstract:
In this paper, analytical expressions for the optimal Vdd and Vth that minimize energy under a given speed constraint are derived. These expressions are based on the EKV transistor model and are valid in both the strong-inversion and sub-threshold regions. The effect of gate leakage on the optimal Vdd and Vth is analyzed. A new gradient-based algorithm for controlling Vdd and Vth from delay and power monitoring results is proposed. A Vdd-Vth controller that uses the algorithm to dynamically control the supply and threshold voltages of a representative logic block (the sum-of-absolute-differences computation of an MPEG decoder) is designed. Simulation results using 65 nm predictive technology models are given.
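A rough sketch of what a gradient-style Vdd-Vth controller driven by delay and power monitors could look like; the alpha-power delay law, the leakage surrogate and all constants below are assumptions for illustration, not the paper's EKV-based expressions or its algorithm:

import numpy as np

K_D, ALPHA, N_KT_Q = 1e-9, 1.3, 0.09    # assumed delay and leakage fitting constants
C_EFF, F_CLK, I0 = 1e-12, 5e8, 1e-6     # assumed switched capacitance, clock, leakage scale

def delay(vdd, vth):
    return K_D * vdd / (vdd - vth) ** ALPHA        # alpha-power-law delay surrogate

def power(vdd, vth):
    dyn = C_EFF * vdd ** 2 * F_CLK                 # dynamic power
    leak = I0 * vdd * np.exp(-vth / N_KT_Q)        # subthreshold leakage power
    return dyn + leak

def control_step(vdd, vth, t_target, mu=0.02):
    """One controller step: restore the delay constraint first, otherwise descend in power."""
    if delay(vdd, vth) > t_target:                 # too slow: raise Vdd and lower Vth
        return vdd + mu, vth - mu / 2
    eps = 1e-3                                     # otherwise follow a numerical power gradient
    g_vdd = (power(vdd + eps, vth) - power(vdd, vth)) / eps
    g_vth = (power(vdd, vth + eps) - power(vdd, vth)) / eps
    step = mu / 100                                # small steps while the constraint holds
    return vdd - step * np.sign(g_vdd), vth - step * np.sign(g_vth)

vdd, vth = 1.0, 0.35
for _ in range(200):
    vdd, vth = control_step(vdd, vth, t_target=2.0e-9)
print(f"Vdd = {vdd:.3f} V, Vth = {vth:.3f} V, delay = {delay(vdd, vth):.2e} s")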
Abstract:
Strained fins are one of the techniques used to improve devices as their size keeps shrinking in new nanoscale nodes. In this paper, we use a 14 nm predictive technology in which pMOS mobility is significantly improved when those devices are built on top of long, uncut fins, while nMOS devices exhibit the opposite behavior due to the combination of strains. We explore the possibility of boosting circuit performance in repetitive structures where long uncut fins can be exploited to increase the impact of fin strain. In particular, pMOS pass-gates are used in 6T complementary SRAM cells (CSRAM) with reinforced pull-ups. These cells are simulated under process variability and compared to the regular SRAM. We show that, when layout-dependent effects are considered, the CSRAM design provides 10% to 40% faster access time while keeping the same area, power, and stability as a regular 6T SRAM cell. The conclusions also apply to 8T SRAM cells. The CSRAM cell also presents increased reliability in technologies whose nMOS devices have more mismatch than pMOS transistors.
Abstract:
The human-technology nexus is a strong focus of Information Systems (IS) research; however, very few studies have explored this phenomenon in anaesthesia. Anaesthesia has a long history of adoption of technological artifacts, ranging from early apparatus to present-day information systems such as electronic monitoring and pulse oximetry. This prevalence of technology in modern anaesthesia and the rich human-technology relationship provide a fertile empirical setting for IS research. This study employed a grounded theory approach that began with a broad initial guiding question and, through simultaneous data collection and analysis, uncovered a core category of technology appropriation. This emergent basic social process captures a central activity of anaesthetists and is supported by three major concepts: knowledge-directed medicine, complementary artifacts and culture of anaesthesia. The outcomes of this study are: (1) a substantive theory that integrates the aforementioned concepts and pertains to the research setting of anaesthesia, and (2) a formal theory, which further develops the core category of appropriation from an anaesthesia-specific to a broader, more general perspective. These outcomes fulfil the objective of a grounded theory study, namely the formation of theory that describes and explains observed patterns in the empirical field. In generalizing the notion of appropriation, the formal theory is developed using the theories of Karl Marx. This Marxian model of technology appropriation is a three-tiered theoretical lens that examines appropriation behaviours at a highly abstract level, connecting the stages of natural, species and social being to the transition of a technology-as-artifact to a technology-in-use via the processes of perception, orientation and realization. The contributions of this research are two-fold: (1) the substantive model contributes to practice by describing and explaining the human-technology nexus in anaesthesia, and thereby offers potential predictive capabilities for designers and administrators to optimize future appropriations of new anaesthetic technological artifacts; and (2) the formal model contributes to research by drawing attention to the philosophical foundations of appropriation in the work of Marx, and subsequently expanding the current understanding of contemporary IS theories of adoption and appropriation.
Abstract:
In a seminal data mining article, Leo Breiman [1] argued that to develop effective predictive classification and regression models, we need to move away from sole dependency on statistical algorithms and embrace a wider toolkit of modeling algorithms that includes data mining procedures. Nevertheless, many researchers still rely solely on statistical procedures when undertaking data modeling tasks; this sole reliance has led to the development of irrelevant theory and questionable research conclusions ([1], p. 199). We outline initiatives that the HPC & Research Support group is undertaking to engage researchers with data mining tools and techniques, including a new range of seminars, workshops, and one-on-one consultations covering data mining algorithms, the relationship between data mining and the research cycle, and the limitations and problems of these new algorithms. Organisational limitations on and restrictions to these initiatives are also discussed.
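As a small illustration of the kind of comparison Breiman's argument invites (not taken from the abstract), the following contrasts a classical linear model with a random-forest ensemble on a synthetic nonlinear benchmark; the dataset and model choices are assumptions:

# Illustrative only: a classical statistical model versus a data-mining style
# ensemble on the same regression task, in the spirit of Breiman's "two cultures".
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_friedman1(n_samples=500, noise=1.0, random_state=0)  # nonlinear benchmark

models = [("linear regression", LinearRegression()),
          ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]
for name, model in models:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")        # 5-fold cross-validated R^2
    print(f"{name:>18}: mean CV R^2 = {r2.mean():.3f}")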
Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators, and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless; because of random transmission delays and packet losses, the control performance of a control system may deteriorate badly and the control system may be rendered unstable. The main challenge of NCS design is to maintain and improve the stable control performance of an NCS. To achieve this, communication and control methodologies have to be designed together. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy NCS communication requirements such as real-time communication and high-precision clock consistency. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design. To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft real-time control applications are modelled using a Markov chain model in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern. Using this Markov chain model, the trade-off between real-time performance and throughput performance can be modelled accurately. Furthermore, a cross-layer optimisation scheme featuring application-layer flow rate adaptation is designed to achieve a trade-off between real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
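A minimal sketch of the general idea of model-based predictive compensation for network delay and packet loss, not the thesis's method; the scalar plant, feedback gain and Bernoulli loss model are illustrative assumptions:

import numpy as np

# Illustrative sketch only: a scalar discrete-time plant controlled over a lossy
# sensor-to-controller link, with a model-based compensator that rolls the last
# received measurement forward over the missed or delayed samples.
rng = np.random.default_rng(1)
A, B, K = 1.05, 0.5, 0.9        # unstable open-loop plant x+ = A x + B u, feedback gain K
P_LOSS, STEPS = 0.3, 100        # Bernoulli packet-loss probability, simulation length

x_true = 1.0                    # real plant state
x_hat, age = 1.0, 0             # last received measurement and its age in samples

for k in range(STEPS):
    # sensor-to-controller link: packet lost with probability P_LOSS
    if rng.random() > P_LOSS:
        x_hat, age = x_true, 0
    else:
        age += 1

    # predictive compensation: roll the stale measurement forward through the model
    x_pred = x_hat
    for _ in range(age):
        x_pred = A * x_pred + B * (-K * x_pred)

    u = -K * x_pred             # control based on the predicted, not the stale, state
    x_true = A * x_true + B * u

print(f"final |x| with compensation: {abs(x_true):.3e}")

Because the compensator replays the same control law through the plant model, the rolled-forward state matches the true state whenever the model is exact, which is the intuition behind compensating for delay and loss on the sensor-to-actuator path.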