Abstract:
In recent years, ontological analysis of conceptual modelling techniques has grown significantly in popularity. To date, related research explores the ontological deficiencies of classical techniques such as ER or UML modelling, as well as business process modelling techniques such as ARIS, or even Web Services standards such as BPEL4WS, BPML, ebXML, BPSS and WSCI. While the ontologies that form the basis of these analyses are reasonably mature, the actual process of ontological analysis still lacks rigour. The current procedure is prone to individual interpretation, which is one reason the entire approach attracts criticism. This paper presents a procedural model for ontological analysis based on the use of meta models, multiple coders and metrics. The model is supported by examples from various ontological analyses.
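The procedural model itself is not detailed in the abstract, but where multiple coders classify the same modelling constructs, inter-coder agreement is a standard metric. A minimal sketch of Cohen's kappa, with hypothetical 'deficit'/'sound' labels, illustrates the kind of measure such a procedure could report:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independence, from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[l] * freq_b[l] for l in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two coders classifying the same constructs as 'deficit' or 'sound' (hypothetical data).
a = ['deficit', 'sound', 'deficit', 'deficit', 'sound', 'sound']
b = ['deficit', 'sound', 'sound', 'deficit', 'sound', 'deficit']
print(cohens_kappa(a, b))  # ~0.33: 1.0 = perfect agreement, 0 = chance level
```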
Abstract:
Determining what consequences are likely to serve as effective punishment for any given behaviour is a complex task. This chapter focuses specifically on illegal road user behaviours and the mechanisms used to punish and deter them. Traffic law enforcement has traditionally used the threat and/or receipt of legal sanctions and penalties to deter illegal and risky behaviours. This process represents the use of positive punishment, one of the key behaviour modification mechanisms. Behaviour modification principles describe four types of reinforcers: positive and negative punishments and positive and negative reinforcements. The terms ‘positive’ and ‘negative’ are not used in an evaluative sense here. Rather, they represent the presence (positive) or absence (negative) of stimuli to promote behaviour change. Punishments aim to inhibit behaviour and reinforcements aim to encourage it. This chapter describes a variety of punishments and reinforcements that have been and could be used to modify illegal road user behaviours. In doing so, it draws on several theoretical perspectives that have defined behavioural reinforcement and punishment in different ways. Historically, the main theoretical approach used to deter risky road use has been classical deterrence theory, which has focussed on the perceived certainty, severity and swiftness of penalties. Stafford and Warr (1993) extended the traditional deterrence principles to include the positive reinforcement concept of punishment avoidance. Evidence of the association between punishment avoidance experiences and behaviour has been established for a number of risky road user behaviours including drink driving, unlicensed driving, and speeding. We chose a novel way of assessing punishment avoidance by specifying two sub-constructs (detection evasion and punishment evasion). Another theorist, Akers (1977), described the idea of competing reinforcers, termed differential reinforcement, within social learning theory. Differential reinforcement describes a balance of reinforcements and punishments as influential on behaviour. This chapter describes a comprehensive way of conceptualising a broad range of reinforcement and punishment concepts, consistent with Akers’ differential reinforcement concept, within a behaviour modification framework that incorporates deterrence principles. The efficacy of three theoretical perspectives to explain self-reported speeding among a sample of 833 Australian car drivers was examined. Results demonstrated that a broad range of variables predicted speeding, including personal experiences of evading detection and punishment for speeding, intrinsic sensations, practical benefits expected from speeding, and an absence of punishing effects from being caught. Not surprisingly, being younger was also significantly related to more frequent speeding, although in a regression analysis, gender did not retain a significant influence once all punishment and reinforcement variables were entered. The implications for speed management, as well as road user behaviour modification more generally, are discussed in light of these findings. Overall, the findings reported in this chapter suggest that a more comprehensive approach is required to manage the behaviour of road users, one which does not rely solely on traditional legal penalties and sanctions.
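The chapter's exact analysis is not reproduced here; the finding that gender loses significance once punishment and reinforcement variables enter the model suggests a blockwise (hierarchical) regression. A minimal sketch, with an assumed data file and hypothetical column names, shows the shape of such an analysis:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: self-reported speeding plus punishment/reinforcement measures.
df = pd.read_csv("speeding_survey.csv")  # assumed file and column names

# Step 1: demographics only.
m1 = smf.ols("speeding_freq ~ age + gender", data=df).fit()

# Step 2: add punishment-avoidance and reinforcement variables; if gender's
# coefficient loses significance here, the pattern matches the chapter's finding.
m2 = smf.ols(
    "speeding_freq ~ age + gender + detection_evasion + punishment_evasion"
    " + intrinsic_sensation + practical_benefit + punishment_effect",
    data=df,
).fit()
print(m2.summary())
```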
Abstract:
Transport through crowded environments is often classified as anomalous, rather than classical, Fickian diffusion. Several studies have sought to describe such transport processes using either a continuous time random walk or fractional order differential equation. For both these models the transport is characterized by a parameter α, where α = 1 is associated with Fickian diffusion and α < 1 is associated with anomalous subdiffusion. Here, we simulate a single agent migrating through a crowded environment populated by impenetrable, immobile obstacles and estimate α from mean squared displacement data. We also simulate the transport of a population of such agents through a similar crowded environment and match averaged agent density profiles to the solution of a related fractional order differential equation to obtain an alternative estimate of α. We examine the relationship between our estimate of α and the properties of the obstacle field for both a single agent and a population of agents; we show that in both cases, α decreases as the obstacle density increases, and that the rate of decrease is greater for smaller obstacles. Our work suggests that it may be inappropriate to model transport through a crowded environment using widely reported approaches including power laws to describe the mean squared displacement and fractional order differential equations to represent the averaged agent density profiles.
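A common way to estimate α from mean squared displacement data is a linear fit in log-log space, since subdiffusive MSD scales as t^α. A minimal sketch of that estimator (not the authors' simulation code), on synthetic subdiffusive data:

```python
import numpy as np

def estimate_alpha(t, msd):
    """Fit MSD(t) ~ K * t**alpha by linear regression in log-log space."""
    slope, intercept = np.polyfit(np.log(t), np.log(msd), 1)
    return slope  # alpha: 1 for Fickian diffusion, < 1 for subdiffusion

# Synthetic subdiffusive data (hypothetical): alpha = 0.8 with mild noise.
rng = np.random.default_rng(0)
t = np.linspace(1.0, 100.0, 200)
msd = 2.0 * t**0.8 * rng.lognormal(0.0, 0.05, t.size)
print(estimate_alpha(t, msd))  # ~0.8
```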
Abstract:
The representation of vampires in horror movies and television programmes has changed considerably over the last two decades. No longer is the vampire portrayed simply as a monster or representation of death. Now, the vampire on our screen, such as True Blood’s Bill Compton or Twilight’s Edward Cullen, passes as human, chooses to make morally sound decisions, becomes an upstanding assimilated citizen, works in the community, and aspires to be a husband to mortal women. The success of recent series such as The Twilight Saga (2009, 2010, 2011, 2012), The Vampire Diaries (2009– ) and True Blood (2008– ) has popularised the idea of vampires who cling to remnants of their humanity (or memories of what it means to be human) and attempt to live as human, which builds upon similar – albeit embryonic – themes which emerged from the vampire sub-genre in the 1990s. Within these narratives, representations of the other have shifted from the traditional idea of the monster to alternative and surprising loci. As this chapter argues, humans themselves, and the concept of the human body, now represent, in many instances, both abject and other. The chapter begins by considering the nature of the abject and otherness in relation to representations of classical vampires and how they have traditionally embodied the other. This provides a backdrop against which to examine the characteristics of the contemporary mainstreaming vampire ‘monster’. An examination of the broad thematic and representational shifts from other to mainstream vampire demonstrates how mainstream monsters are increasingly assimilating into mortal lifestyles with trappings that many viewers may find appealing. The same shifts in theme and representation also reveal that humans are frequently cast as mundane and unappealing in contemporary vampire narratives.
Abstract:
Basing signature schemes on strong lattice problems has been a long-standing open issue. Today, two families of lattice-based signature schemes are known: those based on the hash-and-sign construction of Gentry et al., and Lyubashevsky’s schemes, which are based on the Fiat-Shamir framework. In this paper we show for the first time how to adapt the schemes of Lyubashevsky to the ring signature setting. In particular, we transform the scheme of ASIACRYPT 2009 into a ring signature scheme that provides strong security properties in the random oracle model. Anonymity is ensured in the sense that signatures of different users are within negligible statistical distance even under full key exposure. In fact, the scheme satisfies a notion stronger than the classical full key exposure setting: even if the keypair of the signing user is adversarially chosen, the statistical distance between signatures of different users remains negligible. Considering unforgeability, the best lattice-based ring signature schemes provide either unforgeability against arbitrary chosen-subring attacks or against insider corruption in log-sized rings. In this paper we present two variants of our scheme. In the basic one, unforgeability is ensured in both of these settings. By increasing signature and key sizes by a factor k (typically 80–100), we obtain a variant in which unforgeability is ensured against insider corruption attacks for arbitrary rings. The technique used is quite general and can be adapted to other existing schemes.
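The anonymity notion is phrased in terms of statistical distance between signature distributions. As a point of reference only (toy data, not the actual scheme), a minimal sketch of the total variation distance between two discrete distributions:

```python
import numpy as np

def statistical_distance(p, q):
    """Total variation distance between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 0.5 * np.abs(p - q).sum()

# Anonymity requires the signature distributions of two users to be within
# negligible statistical distance (hypothetical toy distributions here).
p = [0.25, 0.25, 0.25, 0.25]
q = [0.2501, 0.2499, 0.25, 0.25]
print(statistical_distance(p, q))  # ~1e-4: "close" in the toy sense
```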
Abstract:
As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the accessible length scales of soft matter in the modeling of mechanical behaviors. However, the classical thermostat algorithms in highly coarse-grained molecular dynamics methods can underestimate the thermodynamic behaviors of soft matter (e.g. microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of the thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom molecular dynamics simulation. The new algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter systems.
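The paper's own thermostat is not specified in the abstract; for orientation, here is a minimal sketch of a classical Langevin (stochastic) thermostat step for a coarse-grained bead, with the noise amplitude tied to friction and temperature by the fluctuation-dissipation relation. All parameters are hypothetical:

```python
import numpy as np

def langevin_step(x, v, force, m, gamma, kT, dt, rng):
    """One Euler-type Langevin step: deterministic force + friction + thermal noise."""
    # Fluctuation-dissipation: per-step noise variance is 2*gamma*kT*dt/m.
    sigma = np.sqrt(2.0 * gamma * kT / (m * dt))
    a = force(x) / m - gamma * v + sigma * rng.standard_normal(x.shape)
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# Toy harmonic bead in reduced units (hypothetical parameters).
rng = np.random.default_rng(1)
x, v = np.zeros(3), np.zeros(3)
for _ in range(1000):
    x, v = langevin_step(x, v, lambda q: -q, m=1.0, gamma=1.0, kT=1.0, dt=0.01, rng=rng)
```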
Abstract:
Commercially available generic Superglue (cyanoacrylate glue) can be used as an alternative mounting medium for stained resin-embedded semithin sections. It is colourless and contains a volatile, quick-setting solvent that produces permanent mounts of semithin sections for immediate inspection under the light microscope. Here, we compare the use of cyanoacrylate glue for mounting semithin sections with classical dibutyl phthalate xylene (DPX) in terms of practical usefulness, effectiveness and the quality of the final microscopic image.
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines, in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation in cloud computing. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it against several other heuristics on solution quality and computation time by solving a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics and can obtain a better solution in a reasonable time. Furthermore, we verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement it generates for a benchmark problem with a conventional placement that puts a fixed number of mappers/reducers on each machine. The comparison results show that the computation using our mapper/reducer placement is much cheaper than the computation using the conventional placement while still satisfying the computation deadline.
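The proposed heuristic itself is not described in the abstract. Since the placement problem generalizes bin packing, a classical first-fit-decreasing sketch conveys the flavour of such heuristics; task sizes and machine capacity below are hypothetical:

```python
def first_fit_decreasing(task_sizes, capacity):
    """Classical FFD heuristic: place each task (largest first) on the first
    machine with room, opening a new machine when none fits."""
    machines = []    # remaining capacity per machine
    assignment = {}  # task index -> machine index
    for task, size in sorted(enumerate(task_sizes), key=lambda kv: -kv[1]):
        for m, free in enumerate(machines):
            if size <= free:
                machines[m] -= size
                assignment[task] = m
                break
        else:
            machines.append(capacity - size)
            assignment[task] = len(machines) - 1
    return assignment, len(machines)

# Hypothetical mapper/reducer resource demands against unit-capacity machines.
print(first_fit_decreasing([0.5, 0.7, 0.2, 0.4, 0.1], capacity=1.0))  # 2 machines
```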
Abstract:
This paper explores how the amalgamated wisdom of East and West can instigate a wisdom-based renaissance of humanistic epistemology (Rooney & McKenna, 2005) to provide a platform of harmony in managing knowledge-worker productivity, one of the biggest management challenges of the 21st century (Drucker, 1999). The paper invites further discussion from the social and business research communities on the significance of the "interpretation realism" technique in comprehending the philosophies of Lao Tzu, Confucius and Sun Tzu (Lao/Confucius/Sun) written in "Classical Chinese." The paper concludes with a call to build prudent, responsible practices in management, which affects the daily lives of many (Rooney & McKenna, 2005) in today's knowledge-based economy. Interpretation realism is applied to an analysis of three Chinese classics of Lao/Confucius/Sun which have been embodied in Chinese culture for over 2,500 years. Comprehending Lao/Confucius/Sun's philosophies is the first step towards understanding Classical Chinese culture. However, interpreting Chinese subtlety in language, and the yin and yang circular synthesis in the Chinese mode of thinking, is very different from understanding Western thought, with its open communication and its linear, analytical pattern of Aristotelian/Platonic wisdom (Zuo, 2012). Eastern ways of communication are relatively indirect and mediatory in culture, whereas Western ways of communication are relatively direct and litigious (Goh, 2002). Furthermore, Lao/Confucius/Sun's philosophies are difficult to comprehend as there are four written Chinese formats (Pre-classical Chinese, Classical Chinese, Literary Chinese and modern Vernacular Chinese) and over 250 dialects. Because Classical Chinese is poetic, comprehension requires a mixed approach of interpretation realism combining logical reasoning behind "word splitting word occurrences", "empathetic metaphor" and "poetic appreciation of words".
Abstract:
The bed nucleus of the stria terminalis (BNST) is believed to be a critical relay between the central nucleus of the amygdala (CE) and the paraventricular nucleus of the hypothalamus in the control of hypothalamic–pituitary–adrenal (HPA) responses elicited by conditioned fear stimuli. If correct, lesions of CE or BNST should block expression of HPA responses elicited by either a specific conditioned fear cue or a conditioned context. To test this, rats were subjected to cued (tone) or contextual classical fear conditioning. Two days later, electrolytic or sham lesions were placed in CE or BNST. After 5 days, the rats were tested for both behavioral (freezing) and neuroendocrine (corticosterone) responses to tone or contextual cues. CE lesions attenuated conditioned freezing and corticosterone responses to both tone and context. In contrast, BNST lesions attenuated these responses to contextual but not tone stimuli. These results suggest CE is indeed an essential output of the amygdala for the expression of conditioned fear responses, including HPA responses, regardless of the nature of the conditioned stimulus. However, because lesions of BNST only affected behavioral and endocrine responses to contextual stimuli, the results do not support the notion that BNST is critical for HPA responses elicited by conditioned fear stimuli in general. Instead, the BNST may be essential specifically for contextual conditioned fear responses, including both behavioral and HPA responses, by virtue of its connections with the hippocampus, a structure essential to contextual conditioning. The results are also not consistent with the hypothesis that BNST is only involved in unconditioned aspects of fear and anxiety.
Abstract:
Due to knowledge gaps in relation to urban stormwater quality processes, an in-depth understanding of model uncertainty can enhance decision making. Uncertainty in stormwater quality models can originate from a range of sources, such as the complexity of urban rainfall-runoff-stormwater pollutant processes and the paucity of observed data. Unfortunately, studies relating to epistemic uncertainty, which arises from the simplification of reality, are limited, and such uncertainty is often deemed mostly unquantifiable. This paper presents a statistical modelling framework for ascertaining the epistemic uncertainty associated with pollutant wash-off under a regression modelling paradigm, using Ordinary Least Squares Regression (OLSR) and Weighted Least Squares Regression (WLSR) methods with a Bayesian/Gibbs sampling statistical approach. The study results confirmed that WLSR, assuming probability-distributed data, provides more realistic uncertainty estimates of the observed and predicted wash-off values than OLSR modelling. It was also noted that the Bayesian/Gibbs sampling approach is superior to the classical statistical and deterministic approaches most commonly adopted in water quality modelling. The study outcomes confirmed that the prediction error associated with wash-off replication is relatively high due to limited data availability. The uncertainty analysis also highlighted the variability of the wash-off modelling coefficient k as a function of complex physical processes, primarily influenced by surface characteristics and rainfall intensity.
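For reference, weighted least squares in its closed form (the Bayesian/Gibbs machinery is omitted here). A minimal numpy sketch with toy heteroscedastic wash-off-style data and the standard inverse-variance weighting:

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Closed-form WLS estimate: beta = (X'WX)^-1 X'Wy, with W = diag(w)."""
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

# Toy wash-off-style regression (hypothetical): noise variance grows with x,
# so observations are down-weighted by inverse variance.
rng = np.random.default_rng(2)
x = np.linspace(0.1, 1.0, 50)
sigma = 0.05 + 0.2 * x
y = 1.5 * x + 0.3 + sigma * rng.standard_normal(x.size)
X = np.column_stack([x, np.ones_like(x)])
print(weighted_least_squares(X, y, w=1.0 / sigma**2))  # ~[1.5, 0.3]
```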
Abstract:
MapReduce frameworks such as Hadoop are well suited to handling large sets of data which can be processed separately and independently, with canonical applications in information retrieval and sales record analysis. Rapid advances in sequencing technology have ensured an explosion in the availability of genomic data, with a consequent rise in the importance of large-scale comparative genomics, often involving operations and data relationships which deviate from the classical MapReduce structure. This work examines the application of Hadoop to patterns of this nature, using as our focus a well-established workflow for identifying promoters (binding sites for regulatory proteins) across multiple gene regions and organisms, coupled with the unifying step of assembling these results into a consensus sequence. Our approach demonstrates the utility of Hadoop for problems of this nature, showing how the tyranny of the "dominant decomposition" can be at least partially overcome. It also demonstrates how load balance and the granularity of parallelism can be optimized by pre-processing that splits and reorganizes input files, allowing a wide range of related problems to be brought under the same computational umbrella.
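The promoter workflow is only named in the abstract; a minimal local MapReduce-style sketch, with a hypothetical motif and records, illustrates the map/reduce decomposition such a workflow rests on:

```python
# Local, minimal MapReduce-style sketch: map sequences to motif hits, then
# reduce hits per gene region (hypothetical motif and records).
from collections import defaultdict

MOTIF = "TATAAT"  # assumed promoter motif of interest

def map_phase(records):
    for region_id, seq in records:
        for i in range(len(seq) - len(MOTIF) + 1):
            if seq[i:i + len(MOTIF)] == MOTIF:
                yield region_id, i  # key: region, value: hit position

def reduce_phase(pairs):
    hits = defaultdict(list)
    for region_id, pos in pairs:
        hits[region_id].append(pos)
    return dict(hits)

records = [("geneA", "CCTATAATGG"), ("geneB", "TATAATTATAAT")]
print(reduce_phase(map_phase(records)))  # {'geneA': [2], 'geneB': [0, 6]}
```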
Abstract:
Distributed generation (DG) resources are commonly used in electric systems; minimising line losses in radial distribution systems is one of their benefits. Studies have shown the importance of appropriate selection of the location and size of DGs. This paper proposes an analytical method for solving the optimal distributed generation placement (ODGP) problem to minimize line losses in radial distribution systems, using a loss sensitivity factor (LSF) based on the bus-injection to branch-current (BIBC) matrix. The proposed method is formulated and tested on 12- and 34-bus radial distribution systems. The classical grid search algorithm based on successive load flows is employed to validate the results. The main advantages of the proposed method over conventional methods are its robustness and the fact that it requires no calculation or inversion of large admittance or Jacobian matrices. Therefore, the simulation time and the computer memory required for processing data, especially for large systems, decrease.
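The paper's full formulation is not reproduced in the abstract. As a sketch of the underlying idea, the BIBC matrix maps bus injection currents to branch currents, B = BIBC · I, from which line losses and a loss sensitivity per bus follow; the 4-bus feeder data below is hypothetical:

```python
import numpy as np

# Minimal sketch on a hypothetical 4-bus radial feeder (buses 2-4 carry loads).
BIBC = np.array([[1, 1, 1],   # branch 1 carries loads at buses 2, 3, 4
                 [0, 1, 1],   # branch 2 carries loads at buses 3, 4
                 [0, 0, 1]])  # branch 3 carries the load at bus 4
R = np.array([0.1, 0.15, 0.2])  # branch resistances (ohm, assumed)
I = np.array([0.8, 0.5, 0.6])   # load currents at buses 2..4 (A, assumed)

B = BIBC @ I                    # branch currents
loss = np.sum(R * B**2)         # total line loss

# Loss sensitivity of each bus: d(loss)/d(I_k) = 2 * sum_b R_b * B_b * BIBC[b, k].
# The bus with the largest sensitivity is the strongest candidate for DG,
# since generation there offsets the load current that drives the losses.
lsf = 2.0 * (R * B) @ BIBC
print(loss, lsf)
```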
Abstract:
Transport processes within heterogeneous media may exhibit non-classical diffusion or dispersion which is not adequately described by the classical theory of Brownian motion and Fick’s law. We consider a space-fractional advection-dispersion equation based on a fractional Fick’s law. Zhang et al. [Water Resources Research, 43(5) (2007)] considered such an equation with variable coefficients, which they discretised using the finite difference method proposed by Meerschaert and Tadjeran [Journal of Computational and Applied Mathematics, 172(1):65-77 (2004)]. For this method the presence of variable coefficients necessitates applying the product rule before discretising the Riemann–Liouville fractional derivatives using standard and shifted Grünwald formulas, depending on the fractional order. As an alternative, we propose using a finite volume method that deals directly with the equation in conservative form. Fractionally-shifted Grünwald formulas are used to discretise the Riemann–Liouville fractional derivatives at control volume faces, eliminating the need for product rule expansions. We compare the two methods for several case studies, highlighting the convenience of the finite volume approach.
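The Grünwald weights underlying both discretisations follow a simple recurrence. A minimal sketch of the standard weights (not the paper's full finite volume scheme):

```python
import numpy as np

def grunwald_weights(alpha, n):
    """First n+1 Grünwald-Letnikov weights g_k = (-1)**k * C(alpha, k),
    via the standard recurrence g_k = g_{k-1} * (k - 1 - alpha) / k."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

# A shifted Grünwald approximation of the Riemann-Liouville derivative of
# order alpha at x_i uses sum_k g_k * u(x_{i-k+1}) / h**alpha (one-node shift).
alpha = 1.8
g = grunwald_weights(alpha, 5)
print(g)  # [1.0, -1.8, 0.72, 0.048, 0.0144, 0.006336]
```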
Abstract:
We propose a simple and effective way to achieve secure quantum direct secret sharing. The proposed scheme uses the properties of fountain codes to allow a realization of the physical conditions necessary for the implementation of the no-cloning principle for eavesdropping checks and authentication. In our scheme, to achieve a variety of security purposes, nonorthogonal state particles are inserted into the transmitted sequence carrying the secret shares in order to disorder it. However, the positions of the inserted nonorthogonal state particles are not announced directly, but are obtained from the degrees and positions of a sequence that is pre-shared between Alice and each Bob. Moreover, the parties can confirm whether an eavesdropper exists without exchanging classical messages. Most importantly, without knowing the positions of the inserted nonorthogonal state particles and the sequence constituted by the first particles from every EPR pair, the proposed scheme is shown to be secure.
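The fountain-code details are not given in the abstract. As a loose classical illustration of how degrees and block positions arise, a toy LT-style encoder over integer blocks; a uniform degree choice stands in for a proper soliton distribution, and everything here is hypothetical:

```python
import random

def lt_encode_symbol(blocks, rng):
    """One LT fountain-code symbol: pick a degree d, XOR d random source blocks.
    (A uniform degree here stands in for a robust soliton distribution.)"""
    d = rng.randint(1, len(blocks))
    idx = rng.sample(range(len(blocks)), d)
    sym = 0
    for i in idx:
        sym ^= blocks[i]
    return d, sorted(idx), sym  # degree and positions are the pre-shared side data

# Hypothetical secret split into integer blocks; the (degree, positions) pairs
# play the role of the information Alice pre-shares with each Bob.
rng = random.Random(7)
blocks = [0b1011, 0b0110, 0b1110, 0b0001]
print(lt_encode_symbol(blocks, rng))
```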