888 results for limits
Abstract:
Surveying 1,700 journalists from seventeen countries, this study investigates perceived influences on news work. Analysis reveals a dimensional structure of six distinct domains: political, economic, organizational, professional, and procedural influences, as well as reference groups. Across countries, these six dimensions form a hierarchical structure in which organizational, professional, and procedural influences are perceived as more powerful limits on journalists' work than political and economic influences.
Abstract:
This chapter will begin with a brief summary of some recent research in the field of comparative penology. This work will be examined to explore the benefits, difficulties and limits of attempting to link criminal justice issues to types of advanced democratic polities, with particular emphasis on political economies. This stream of comparative penology examines data such as imprisonment rates and levels of punitiveness in different countries, before drawing conclusions based on the patterns which seem to emerge. Foremost among these is that the high-imprisoning countries tend to be the advanced western liberal democracies which have gone furthest in adopting neoliberal economic and social policies, as against the lower imprisonment rates of social democracies, which have attempted, in various ways, to temper free-market economic policies. Such work brings both social democracy and neoliberalism into focus as issues for, or subjects of, criminology: not in the sense of new ‘brands’ of criminology, but rather as an examination of the connections between the political projects of social democracy and neoliberalism, and issues of crime and criminal justice. In the new comparative penology, social democracy and neoliberalism are cast in opposition, simultaneously raising the questions of to what extent, and how adequately, social democracy and neoliberalism have been constituted as subjects in criminology, and whether dichotomy is the only available trope of analysis.
Abstract:
This paper presents an optimisation algorithm to maximise the loadability of single wire earth return (SWER) networks by minimising the cost of batteries and regulators subject to voltage constraints and thermal limits. The algorithm, which finds the optimal locations of batteries and regulators, uses a hybrid of discrete particle swarm optimisation and mutation (DPSO + Mutation). Simulation results on a realistic, highly loaded SWER network show the effectiveness of using batteries to improve the loadability of the network in a cost-effective way. In this case, while the existing network can supply only 61% of peak load without violating the constraints, loadability is increased to the full peak load by two optimally located battery sites. That is, in a SWER system like the one studied, each optimally located kVA of installed battery capacity supports a loadability increase of about 2 kVA.
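For readers unfamiliar with the technique, the following is a minimal sketch of hybrid discrete particle swarm optimisation with mutation for siting a fixed number of devices on network nodes. The network model, fitness function and all parameter values are illustrative assumptions, not the paper's formulation, which would evaluate load flow, voltage and thermal constraints.

```python
# Minimal DPSO + Mutation sketch for device placement (illustrative only).
import random

N_NODES, N_SITES = 30, 2           # assumed feeder size; 2 sites as in the case study
SWARM, ITERS, P_MUT = 20, 100, 0.15

def fitness(sites):
    """Placeholder cost: lower is better. A real version would run a load
    flow and penalise voltage/thermal violations plus device cost."""
    return sum(abs(s - N_NODES // 2) for s in sites)   # toy objective only

def random_particle():
    return random.sample(range(N_NODES), N_SITES)

particles = [random_particle() for _ in range(SWARM)]
pbest = list(particles)
gbest = min(pbest, key=fitness)

for _ in range(ITERS):
    for i, p in enumerate(particles):
        # Discrete 'velocity' update: each site moves toward the personal or
        # global best with some probability, else stays put (duplicates are
        # tolerated in this toy version).
        new = []
        for j in range(N_SITES):
            r = random.random()
            if r < 0.4:
                new.append(pbest[i][j])
            elif r < 0.8:
                new.append(gbest[j])
            else:
                new.append(p[j])
        # Mutation: occasionally re-site one device at random to keep diversity.
        if random.random() < P_MUT:
            new[random.randrange(N_SITES)] = random.randrange(N_NODES)
        particles[i] = new
        if fitness(new) < fitness(pbest[i]):
            pbest[i] = new
    gbest = min(pbest, key=fitness)

print("best sites:", sorted(gbest), "cost:", fitness(gbest))
```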
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant approach to developing emulators has been to use priors in the form of Gaussian stochastic processes (GASP), conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there are a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept of developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, because it incorporates our knowledge about the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by application to a simple hydrological model.
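To make the conditioning step concrete, here is a minimal Rauch-Tung-Striebel (Kalman) smoother for a scalar linear state-space model, a sketch under assumed toy parameters. The paper's emulator additionally places Gaussian process priors on the innovation terms as functions of model parameters and inputs, which this sketch omits.

```python
# RTS smoother on x_t = a*x_{t-1} + w_t, y_t = x_t + v_t (all values assumed).
import numpy as np

a, q, r = 0.9, 0.05, 0.1                  # transition, process var., obs. var.
y = np.array([1.0, 0.8, 0.9, 0.5, 0.4])   # toy "design" outputs

n = len(y)
xf = np.zeros(n); Pf = np.zeros(n)        # filtered mean / variance
xp = np.zeros(n); Pp = np.zeros(n)        # predicted mean / variance
x, P = 0.0, 1.0                           # prior on the initial state

for t in range(n):
    # Predict one step ahead, then update with the observation y_t.
    xp[t], Pp[t] = a * x, a * a * P + q
    k = Pp[t] / (Pp[t] + r)               # Kalman gain
    x = xp[t] + k * (y[t] - xp[t])
    P = (1 - k) * Pp[t]
    xf[t], Pf[t] = x, P

# Backward (RTS) smoothing pass: condition each state on ALL observations.
xs = xf.copy(); Ps = Pf.copy()
for t in range(n - 2, -1, -1):
    g = Pf[t] * a / Pp[t + 1]             # smoother gain
    xs[t] += g * (xs[t + 1] - xp[t + 1])
    Ps[t] += g * g * (Ps[t + 1] - Pp[t + 1])

print(np.round(xs, 3))
```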
Abstract:
Sequential Design
Molecular Weight Range
Functional Monomers: Possibilities, Limits, and Challenges
Block Copolymers: Combinations, Block Lengths, and Purities
Modular Design
End-Group Chemistry
Ligation Protocols
Conclusions
Abstract:
Recently, the botnet, a network of compromised computers, has been recognized as the biggest threat to the Internet. The bots in a botnet communicate with the botnet owner via a communication channel called the Command and Control (C&C) channel. There are three main types of C&C channel: Internet Relay Chat (IRC), Peer-to-Peer (P2P) and web-based protocols. By exploiting the flexibility of Web 2.0 technology, the web-based botnet has reached a new level of sophistication. In August 2009, such a botnet was found on Twitter, one of the most popular Web 2.0 services. In this paper, we describe a new type of botnet that uses a Web 2.0 service as a C&C channel and as temporary storage for its stolen information. We then propose a novel approach to thwart this type of attack. Our method applies a unique identifier of the computer, an encryption algorithm with session keys, and CAPTCHA verification.
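As a rough illustration of how the proposed ingredients could fit together, the sketch below derives a machine-bound identifier and a fresh session key, then seals a message. All names and the toy keystream cipher are assumptions for illustration; they are not the paper's implementation and are not secure, and the CAPTCHA step is omitted.

```python
# Illustrative machine identifier + session-key sealing (stdlib only, NOT secure).
import hashlib, hmac, secrets, uuid

def machine_id() -> bytes:
    # Stable per-machine identifier (here: hash of the MAC address).
    return hashlib.sha256(uuid.getnode().to_bytes(6, "big")).digest()

def session_key() -> bytes:
    # Fresh random key per session, so captured traffic cannot be replayed.
    return secrets.token_bytes(32)

def seal(key: bytes, msg: bytes) -> bytes:
    # Toy "encryption": XOR with a SHA-256-derived keystream, then a MAC
    # keyed on the machine identifier to bind the message to this computer.
    stream, counter = b"", 0
    while len(stream) < len(msg):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    ct = bytes(m ^ s for m, s in zip(msg, stream))
    tag = hmac.new(machine_id(), ct, hashlib.sha256).digest()
    return tag + ct

k = session_key()
print(seal(k, b"status report").hex())
```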
Abstract:
Australia’s building stock includes many older commercial buildings with numerous factors that impact energy performance and indoor environment quality. The built environment industry has generally focused heavily on improving physical building design elements for greater energy efficiency (such as retrofits and environmental upgrades); however, there are noticeable ‘upper limits’ to performance improvements in these areas. To achieve a step-change improvement in building performance, the authors propose that additional components need to be addressed in a whole-of-building approach, including the way building design elements are managed and the level of stakeholder engagement between owners, tenants and building managers. This paper focuses on the opportunities provided by this whole-of-building approach, presenting the findings of a research project undertaken through the Sustainable Built Environment National Research Centre (SBEnrc) in Australia. Researchers worked with a number of industry partners over two years to investigate issues facing stakeholders at base building and tenancy levels, and the barriers to improving building performance. Through a mixed-method, industry-led research approach, five ‘nodes’ were identified in whole-of-building performance evaluation, each with interlinking and overlapping complexities that can influence performance. The nodes cover building management, occupant experience, indoor environment quality, agreements and culture, and design elements. This paper outlines the development and testing of these nodes and their interactions, and the resultant multi-nodal tool, called the ‘Performance Nexus’ tool. The tool is intended to be of most benefit in evaluating opportunities for performance improvement in the large stock of existing low-performing buildings.
Abstract:
Background. Volitional risky driving behaviours such as drink- and drug-driving (i.e. substance-impaired driving) and speeding contribute to the overrepresentation of young novice drivers in road crash fatalities, and crash risk is greatest during the first year of independent driving in particular. Aims. To explore: 1) the self-reported compliance of drivers with road rules regarding substance-impaired driving and other risky driving behaviours (e.g., speeding, driving while tired), one year after progression from a Learner to a Provisional (intermediate) licence; and 2) the interrelationships between substance-impaired driving and other risky driving behaviours (e.g., crashes, offences, and Police avoidance). Methods. Drivers (n = 1,076; 319 males) aged 18-20 years were surveyed regarding their sociodemographics (age, gender) and self-reported driving behaviours, including crashes, offences, Police avoidance, and driving intentions. Results. A relatively small proportion of participants reported driving after taking drugs (6.3% of males, 1.3% of females) and after drinking alcohol (18.5% of males, 11.8% of females). In comparison, a considerable proportion reported at least occasionally exceeding speed limits (86.7% of novices) and risky behaviours like driving when tired (83.6% of novices). Substance-impaired driving was associated with avoiding Police, speeding, risky driving intentions, and self-reported crashes and offences. Forty-three percent of respondents who drove after taking drugs also reported alcohol-impaired driving. Discussion and Conclusions. Behaviours of concern include drink-driving, speeding, novice driving errors such as misjudging the speed of oncoming vehicles, violations of graduated driver licensing passenger restrictions, driving tired, driving faster if in a bad mood, and active punishment avoidance. Given the interrelationships between the risky driving behaviours, a deeper understanding of influential factors is required to inform targeted and general countermeasure implementation and evaluation during this critical driving period. Notwithstanding this, a combination of enforcement, education, and engineering efforts appears necessary to improve the road safety of the young novice driver, and of the drink-driving young novice driver in particular.
Abstract:
Motion control systems have a significant impact on the performance of ships and marine structures, allowing them to perform tasks in severe sea states and over long periods of time. Ships are designed to operate with adequate reliability and economy, and to achieve this it is essential to control the motion. For each type of ship and operation performed (transit, landing a helicopter, fishing, deploying and recovering loads, etc.), there are not only desired motion settings but also limits on the acceptable (undesired) motion induced by the environment. The task of a ship motion control system is therefore to act on the ship so that it follows the desired motion as closely as possible. This book provides an introduction to the field of ship motion control by studying the control system designs for course-keeping autopilots with rudder roll stabilisation and integrated rudder-fin roll stabilisation. These particular designs provide a good overview of the difficulties encountered by designers of ship motion control systems and therefore serve well as an example-driven introduction to the field. The ideas of combining the control design of autopilots with that of fin roll stabilisers, and of using rudder-induced roll motion as the sole source of roll stabilisation, seem to have emerged in the late 1960s. Since that time, these control designs have been the subject of continuous and ongoing research. This ongoing interest is a consequence of the significant bearing that the control strategy has on performance, and of the issues associated with control system design. The challenge of these designs lies in devising a control strategy to address the following issues: underactuation, disturbance rejection with a non-minimum-phase system, input and output constraints, model uncertainty, and large unmeasured stochastic disturbances. To date, the majority of the work reported in the literature has focused strongly on some of these design issues, while the remaining issues have been addressed using ad hoc approaches. This has provided additional motivation for revisiting these control designs and examining the benefits of applying a contemporary design framework that can potentially address the majority of the design issues.
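As a toy illustration of the course-keeping problem the book starts from, the sketch below simulates a PD heading autopilot on the classic first-order Nomoto yaw model. The model, gains and ship parameters are assumptions for illustration, not taken from the book, and the roll dynamics central to rudder roll stabilisation are omitted.

```python
# PD course-keeping autopilot on the Nomoto model T*r_dot + r = K*delta.
import math

K, T = 0.2, 60.0            # Nomoto gain [1/s] and time constant [s] (assumed)
kp, kd = 2.0, 40.0          # PD autopilot gains (assumed)
dt, steps = 0.5, 2400       # 20 minutes of simulated time

psi, r = 0.0, 0.0           # heading [rad] and yaw rate [rad/s]
psi_ref = math.radians(30)  # commanded course change

for _ in range(steps):
    delta = kp * (psi_ref - psi) - kd * r                         # rudder command
    delta = max(-math.radians(35), min(math.radians(35), delta))  # rudder limit
    r += dt * (K * delta - r) / T                                 # Nomoto dynamics
    psi += dt * r

print("final heading error [deg]:", round(math.degrees(psi_ref - psi), 2))
```

The rudder saturation line is one concrete instance of the input constraints the book lists among the design challenges.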
Abstract:
Preserving the integrity of the skin's outermost layer (the epidermis) is vital for humans to thrive in hostile surroundings. Covering the entire body, the epidermis forms a thin but impenetrable cellular cordon that repels external assaults and blocks the escape of water and electrolytes from within. This structure exists in a perpetual state of regeneration, in which the production of new cellular subunits at the base of the epidermis is offset by the release of terminally differentiated corneocytes from the surface. It is becoming increasingly clear that proteases play vital roles in assembling and maintaining the epidermal barrier. More than 30 proteases are expressed by keratinocytes or infiltrating immune cells, and the activity of each must be maintained within narrow limits and confined to the correct time and place. Accordingly, over- or under-exertion of proteolytic activity is a common factor in a multitude of skin disorders that range in severity from relatively mild to life-threatening. This review explores the current state of knowledge on the involvement of proteases in skin diseases and the latest findings from proteomic and transcriptomic studies focused on uncovering novel (patho)physiological roles for these enzymes.
Abstract:
Security models for two-party authenticated key exchange (AKE) protocols have developed over time to provide security even when the adversary learns certain secret keys. In this work, we advance the modelling of AKE protocols by considering more granular, continuous leakage of the long-term secrets of protocol participants: the adversary can adaptively request arbitrary leakage of long-term secrets even after the test session is activated, with limits on the amount of leakage per query but no bound on the total leakage. We present a security model supporting continuous leakage even when the adversary learns certain ephemeral secrets or session keys, and give a generic construction of a two-pass leakage-resilient key exchange protocol that is secure in the model; our protocol achieves continuous, after-the-fact leakage resilience at little more cost than a previous protocol offering only bounded, non-after-the-fact leakage.
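For orientation, the sketch below shows the bare two-pass Diffie-Hellman skeleton that two-pass key exchange constructions build on. It is unauthenticated, uses toy-sized parameters, and omits the long-term keys and leakage countermeasures that the paper's protocol is actually about.

```python
# Toy two-pass (unauthenticated) Diffie-Hellman skeleton. NOT secure.
import hashlib, secrets

p = 0xFFFFFFFFFFFFFFC5   # a small 64-bit prime, illustrative only
g = 5

# Pass 1: the initiator picks x and sends X = g^x mod p.
x = secrets.randbelow(p - 2) + 1
X = pow(g, x, p)

# Pass 2: the responder picks y and sends Y = g^y mod p.
y = secrets.randbelow(p - 2) + 1
Y = pow(g, y, p)

# Both sides derive the session key from the shared secret and transcript.
k_init = hashlib.sha256(str((pow(Y, x, p), X, Y)).encode()).hexdigest()
k_resp = hashlib.sha256(str((pow(X, y, p), X, Y)).encode()).hexdigest()
assert k_init == k_resp
print("session key:", k_init[:16], "...")
```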
Abstract:
Biodiesel derived from microalgae is one of a suite of potential solutions to meet the increasing demand for a renewable, carbon-neutral energy source. However, numerous challenges must be addressed before algal biodiesel can become commercially viable. These include the economic feasibility of harvesting and dewatering the biomass, and the extraction of lipids and their conversion into biodiesel. It is therefore essential to find a suitable extraction process, given that these processes presently contribute significantly to total production costs, which at this stage prevent biodiesel from competing financially with petroleum diesel. This study focuses on pilot-scale (100 kg dried microalgae) solvent extraction of lipids from microalgae and subsequent transesterification to biodiesel. Three different solvents (hexane, isopropanol (IPA) and hexane + IPA (1:1)) were used with two different extraction methods (static and Soxhlet) at bench scale to find the most suitable solvent extraction process for the pilot scale. The Soxhlet method extracted only 4.2% more lipid than the static method. The fatty acid profiles of the different extraction methods with different solvents were similar, suggesting that none of the solvents or extraction processes was biased towards extraction of particular fatty acids. Considering the cost and availability of the solvents, hexane was chosen for pilot-scale extraction using the static method. At pilot scale the lipid yield was found to be 20.3% of total biomass, which is 2.5% less than at bench scale. Extracted fatty acids were dominated by polyunsaturated fatty acids (PUFAs) (68.94±0.17%), with 47.7±0.43% and 17.86±0.42% being docosahexaenoic acid (DHA) (C22:6) and docosapentaenoic acid (DPA) (C22:5, ω-3), respectively. Such high amounts of long-chain polyunsaturated fatty acids are unique to some marine microalgae and protists and vary with environmental conditions, culture age and nutrient status, as well as with the cultivation process. The calculated density and viscosity of the transesterified fatty acid methyl esters (FAMEs) were within the limits of the biodiesel standard specifications ASTM D6751-2012 and EN 14214. The calculated cetane number, however, was significantly lower (17.8 to 18.6) than the minimum required by ASTM D6751-2012 or EN 14214. We conclude that the obtained microalgal biodiesel would likely only be suitable for blending with petroleum diesel to a maximum of 5 to 20%.
Abstract:
The swine influenza (H1N1) outbreak in 2009 highlighted the ethical and legal pressures facing general practitioners and health workers in emergency departments in determining the nature and limits of their obligations to their patients and the public. Health workers require guidance on the multiple, overlapping, and at times conflicting legal and ethical duties owed to patients and prospective patients, employers and fellow health workers, and their families. Existing Australian guidance on these issues, in the form of statements of medical ethics and other advisory sources, is shown to need further amplification if health workers are to be provided with the certainty and direction required. Given the complexity of the issues, Australia would therefore benefit from more extensive consultation with the variety of stakeholders involved in these questions if pandemic plans are to deal smoothly with future crises in an ethically and legally sound manner.
Abstract:
‘Dark Cartographies’ is a slowly evolving meditation upon seasonal change, life after light and the occluding shadows of human influence. Through creating experiences of the many ‘times of a night’, the work allows participants to experience deep engagement with rich spectra of hidden place and sound. By amplifying and shining light upon a myriad of lives lived in blackness, ‘Dark Cartographies’ tempts us to re-understand seasonal change as actively-embodied temporality, inflected by our climate-changing disturbances. ‘Dark Cartographies’ uses custom interactive systems, illusionary techniques and real-time spatial audio that draw upon a rich array of media, including seasonal, nocturnal field recordings sourced in the Far North Queensland region and detailed observations of foliage & flowering phases. By drawing inspiration from the subtle transitions between what Europeans named ‘Summer’ and ‘Autumn’, and by including the body and its temporal disturbances within the work, ‘Dark Cartographies’ creates compellingly immersive environments that wrap us in atmospheres beyond sight and hearing. ‘Dark Cartographies’ is a dynamic new installation directed & choreographed by environmental cycles, alluding to a new framework for making works that we call ‘Seasonal’. This powerful, responsive & experiential work draws attention to that which will disappear when biodiverse worlds have descended into an era of permanent darkness – an ‘extinction of human experience’. By tapping into the deeply interlocking seasonal cycles of environments that are themselves intimately linked with social, geographical & political concerns, participating audiences are challenged to see the night, their locality & ecologies in new ways, extending their personal limits of perception, imagery & comprehension.
Abstract:
The operation of the law rests on the selection of an account of the facts. Whether this involves prediction or postdiction, certainty is not attainable. Any attempt to model the operation of the law completely will therefore raise the question of how to model the process of proof. In selecting a model, a crucial question is whether the model is to be used normatively or descriptively. Focussing on postdiction, this paper presents and contrasts the mathematical model with the story model. The former carries the normative stamp of scientific approval, whereas the latter was developed by experimental psychologists to describe how humans reason. Neil Cohen's attempt to use a mathematical model descriptively illustrates the dangers of not clearly setting this parameter of the modelling process. It should be kept in mind that the labels 'normative' and 'descriptive' are not eternal. The mathematical model has its normative limits, beyond which we may need to critically assess models with descriptive origins.
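The 'mathematical model' of proof is commonly identified with Bayesian updating of belief in a hypothesis H given evidence E. As a purely illustrative worked example, with an invented prior of 0.1 and invented likelihoods of 0.8 and 0.2:

```latex
% Illustrative Bayesian updating; all numbers are invented for the example.
\[
  P(H \mid E)
  = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
  = \frac{0.8 \times 0.1}{0.8 \times 0.1 + 0.2 \times 0.9}
  \approx 0.31 .
\]
```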