888 results for "arguments by definition"
Abstract:
Rhetorical studies have documented the relevance of presidential rhetoric and the president's power to define public issues through discourse. This research examines the rhetorical practices through which former Mexican president Calderón defined the war on drugs that characterized his administration. I argue that Calderón advanced a definition of the drug problem through definitional practices such as association, dissociation, and condensation symbols. My analysis 1) identifies the definitional rhetorical practices that characterized Calderón's war on drugs; 2) examines the implications of these practices; and 3) addresses the constraints politicians face when attempting to alter previously advanced definitions. In conclusion, I explain how Calderón's metaphors and definitional practices opened a rhetorical space in which human rights could be revoked and violence encouraged.
Abstract:
The quantification of CO2 emissions from anthropogenic land use and land use change (eLUC) is essential to understand the drivers of the atmospheric CO2 increase and to inform climate change mitigation policy. Reported values in synthesis reports are commonly derived from different approaches (observation-driven bookkeeping and process-modelling) but recent work has emphasized that inconsistencies between methods may imply substantial differences in eLUC estimates. However, a consistent quantification is lacking and no concise modelling protocol for the separation of primary and secondary components of eLUC has been established. Here, we review differences of eLUC quantification methods and apply an Earth System Model (ESM) of Intermediate Complexity to quantify them. We find that the magnitude of effects due to merely conceptual differences between ESM and offline vegetation model-based quantifications is ~ 20 % for today. Under a future business-as-usual scenario, differences tend to increase further due to slowing land conversion rates and an increasing impact of altered environmental conditions on land-atmosphere fluxes. We establish how coupled Earth System Models may be applied to separate secondary component fluxes of eLUC arising from the replacement of potential C sinks/sources and the land use feedback and show that secondary fluxes derived from offline vegetation models are conceptually and quantitatively not identical to either, nor their sum. Therefore, we argue that synthesis studies should resort to the "least common denominator" of different methods, following the bookkeeping approach where only primary land use emissions are quantified under the assumption of constant environmental boundary conditions.
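A reader may find it helpful to see the decomposition the abstract implies written out. The notation below is ours, introduced only to make the argument explicit, and is not taken from the paper:

    \[
      e_{\mathrm{LUC}} \;=\; e_{\mathrm{prim}} \;+\; e_{\mathrm{rss}} \;+\; e_{\mathrm{lfb}},
    \]

where e_prim is the primary land use emission computed under constant environmental boundary conditions (the bookkeeping term), e_rss is the secondary flux arising from the replacement of potential carbon sinks/sources, and e_lfb is the land use feedback flux. On this reading, the abstract's claim is that the secondary fluxes derived from offline vegetation models equal neither e_rss nor e_lfb nor their sum, which leaves e_prim as the "least common denominator" across methods.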
Abstract:
"First published in the Christian herald."
Abstract:
The book within which this chapter appears is published as a research reference book (not a coursework textbook) on Management Information Systems (MIS) for seniors or graduate students in Chinese universities. It is hoped that this chapter, along with the others, will be helpful to MIS scholars and PhD/Masters research students in China who seek understanding of several central Information Systems (IS) research topics and related issues. The subject of this chapter, 'Evaluating Information Systems', is broad and cannot be addressed in its entirety in any depth within a single book chapter. The chapter proceeds from the truism that organizations have limited resources and that those resources need to be invested in a way that provides the greatest benefit to the organization. IT expenditure represents a substantial portion of any organization's investment budget, and IT-related innovations have broad organizational impacts. Evaluation of the impact of this major investment is essential to justify the expenditure both pre- and post-investment. Evaluation is also important to prioritize possible improvements. The chapter (and most of the literature reviewed herein) admittedly assumes a black-box view of IS/IT, emphasizing measures of its consequences (e.g. for organizational performance or the economy) or perceptions of its quality from a user perspective. This reflects the MIS emphasis, a 'management' emphasis rather than a software engineering emphasis, where a software engineering emphasis might be on technical characteristics and technical performance. Though a black-box approach limits the diagnostic specificity of findings from a technical perspective, it offers many benefits. In addition to superior management information, these benefits may include economy of measurement and comparability of findings (e.g. see Part 4 on Benchmarking IS). The chapter does not purport to be a comprehensive treatment of the relevant literature. It does, however, reflect many of the more influential works, and a representative range of important writings in the area. The author has been somewhat opportunistic in Part 2, employing a single journal, The Journal of Strategic Information Systems, to derive a classification of literature in the broader domain. Nonetheless, the arguments for this approach are believed to be sound, and the value from this exercise real. The chapter drills down from the general to the specific. It commences with a high-level overview of the general topic area in two parts: Part 1 addresses existing research in the more comprehensive IS research outlets (e.g. MISQ, JAIS, ISR, JMIS, ICIS), and Part 2 addresses existing research in a key specialist outlet (i.e. the Journal of Strategic Information Systems). Subsequently, in Part 3, the chapter narrows to focus on the sub-topic 'Information Systems Success Measurement', then drills deeper to become even more focused in Part 4 on 'Benchmarking Information Systems'. In other words, the chapter drills down from Parts 1 & 2, the value of IS, to Part 3, measuring IS success, to Part 4, benchmarking IS. While the commencing Parts (1 & 2) are by definition broadly relevant to the chapter topic, the subsequent, more focused Parts (3 and 4) admittedly reflect the author's more specific interests. Thus, the three chapter foci (value of IS, measuring IS success, and benchmarking IS) are not mutually exclusive; rather, each subsequent focus is in most respects a subset of the former.
Parts 1 & 2, 'the Value of IS', take a broad view, with much emphasis on the business value of IS, or the relationship between information technology and organizational performance. Part 3, 'Information System Success Measurement', focuses more specifically on measures and constructs employed in empirical research into the drivers of IS success (ISS). DeLone and McLean (1992) inventoried and rationalized disparate prior measures of ISS into six constructs: System Quality, Information Quality, Individual Impact, Organizational Impact, Satisfaction and Use (later suggesting a seventh construct, Service Quality (DeLone and McLean 2003)). These six constructs have been used extensively, individually or in some combination, as the dependent variable in research seeking to better understand the important antecedents or drivers of IS success. Part 3 reviews this body of work. Part 4, 'Benchmarking Information Systems', drills deeper again, focusing more specifically on a measure of the IS that can be used as a 'benchmark'. This section consolidates and extends the work of the author and his colleagues to derive a robust, validated IS-Impact measurement model for benchmarking contemporary Information Systems (IS). Though IS-Impact, like ISS, has potential value in empirical, causal research, its design and validation have emphasized its role and value as a comparator: a measure that is simple, robust and generalizable, and which yields results that are as far as possible comparable across time, across stakeholders, and across differing systems and systems contexts.
Abstract:
In this paper I examine the recent arguments by Charles Foster, Jonathan Herring, Karen Melham and Tony Hope against the utility of the doctrine of double effect. One basis on which they reject the utility of the doctrine is their claim that it is notoriously difficult to apply what they identify as its 'core' component, namely, the distinction between intention and foresight. It is this contention that is the primary focus of my article. I argue against this claim that the intention/foresight distinction remains a fundamental part of the law in those jurisdictions where intention remains an element of the offence of murder and that, accordingly, it is essential to resolve the putative difficulties of applying the intention/foresight distinction so as to ensure the integrity of the law of murder. I argue that the main reasons advanced for the claim that the intention/foresight distinction is difficult to apply are ultimately unsustainable, and that the distinction is not as difficult to apply as the authors suggest.
Abstract:
According to certain arguments, computation is observer-relative either in the sense that many physical systems implement many computations (Hilary Putnam), or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. During the course of this examination, I give a formal definition of the class of combinatorial-state automata, upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Toward the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems, and offer this as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that the definition does not imply Searle's claim for the universal implementation of computations. However, the definition may support claims that are weaker than Searle's, yet still troubling to the computationalist. There remains a kernel of relativity in implementation at any rate, since the interpretation of physical systems seems itself to be an observer-relative matter, to some degree at least. This observation helps clarify the role the notion of computation can play in cognitive science. Specifically, I will argue that the notion should be conceived as an instrumental rather than as a fundamental or foundational one.
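For orientation, the following toy Python sketch conveys the flavor of a combinatorial-state automaton: a total state is a vector of substates, and each substate is updated as a function of the whole state vector and an input vector. The names and the example component rule are ours for illustration and do not reproduce the thesis's formal definition:

    from typing import Callable, Tuple

    State = Tuple[int, ...]
    Inputs = Tuple[int, ...]

    def csa_step(state: State, inputs: Inputs,
                 rule: Callable[[State, Inputs, int], int]) -> State:
        # Each substate is updated as a function of the whole state vector
        # and the input vector -- the defining feature of a CSA.
        return tuple(rule(state, inputs, i) for i in range(len(state)))

    # Illustrative component rule (ours): parity of the two neighbouring
    # substates plus an input bit.
    rule = lambda s, x, i: (s[i - 1] + s[(i + 1) % len(s)] + x[i % len(x)]) % 2
    print(csa_step((0, 1, 1, 0), (1, 0), rule))  # -> (0, 1, 0, 1)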
Abstract:
This work presents a study of the stability of the equations of warm inflation with a viscous radiation fluid. The fluid's viscosity arises from the constant decay of particles into it, owing to the dissipation of the inflationary scalar field, the inflaton. This viscosity, which may be bulk or shear, is treated in terms of non-equilibrium thermodynamic theories. The study is restricted to the background equations of warm inflation, so that only bulk viscosity has a significant effect, shear viscosity being important only in the context of cosmological perturbations. The description of viscosity in terms of non-equilibrium thermodynamics cannot, however, be carried out uniquely, since the only information we have about irreversible processes is the second law of thermodynamics. We therefore look for theories that comply with this law and that, on plausible grounds, are capable of describing the behavior of dissipative fluxes near equilibrium. The aim of this work is to study the stability of viscous warm inflation under causal and non-causal theories of the viscous radiation fluid, so as to observe the impact of viscosity on the inflationary regime and the relevance of taking causality into account. For the radiation fluid, the theories considered are the non-causal theory of Eckart and the causal theories of Israel-Stewart and of Denicol et al. (non-linear causal dissipative hydrodynamics). We find that the causal theories, as expected, besides being by definition consistent with a finite propagation speed for dissipative fluxes, render the dynamical system stable for viscosity values farther from equilibrium, and the theory of Denicol et al. is clearly the most robust in this sense. This work thus continues the study of non-isentropic effects in inflation, since, beyond the dissipation of the inflaton in warm inflation, the impact of viscosity has attracted considerable interest.
Abstract:
Agreement on response criteria in rheumatoid arthritis (RA) has allowed better standardization and interpretation of clinical trial reports. With recent advances in therapy, the proportion of patients achieving a satisfactory state of minimal disease activity (MDA) is becoming a more important measure with which to compare different treatment strategies. The threshold for MDA is between high disease activity and remission and, by definition, anyone in remission will also be in MDA. True remission is still rare in RA; in addition, the American College of Rheumatology definition is difficult to apply in the context of trials. Participants at OMERACT 6 in 2002 agreed on a conceptual definition of MDA: "that state of disease activity deemed a useful target of treatment by both the patient and the physician, given current treatment possibilities and limitations." To prepare for a preliminary operational definition of MDA for use in clinical trials, we asked rheumatologists to assess 60 patient profiles describing real RA patients seen in routine clinical practice. Based on their responses, several candidate definitions for MDA were designed and discussed at OMERACT 7 in 2004. Feedback from participants and additional on-site analyses in a cross-sectional database allowed the formulation of 2 preliminary, equivalent definitions of MDA: one based on the Disease Activity Score 28 (DAS28) index, and one based on meeting cutpoints in 5 out of the 7 WHO/ILAR core set measures. Researchers applying these definitions first need to choose whether to use the DAS28 or the core set definition, because although each selects a similar proportion in a population, these are not always the same patients. In both MDA definitions, an initial decision node places all patients in MDA who have a tender joint count of 0 and a swollen joint count of 0, and an erythrocyte sedimentation rate (ESR) no greater than 10 mm. If this condition is not met: • the DAS28 definition places patients in MDA when DAS28
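Only the initial decision node of the rule is fully specified above; the DAS28 cutpoint is cut off in the abstract. A minimal Python sketch follows, with the cutpoint deliberately left as a caller-supplied parameter rather than guessed:

    def initial_decision_node(tender_joints: int, swollen_joints: int,
                              esr_mm: float) -> bool:
        # Shared first node of both preliminary MDA definitions: tender joint
        # count 0, swollen joint count 0, and ESR no greater than 10 mm.
        return tender_joints == 0 and swollen_joints == 0 and esr_mm <= 10

    def in_mda_das28(tender_joints, swollen_joints, esr_mm,
                     das28, das28_cutpoint):
        # DAS28 branch, applied only when the initial node is not met; the
        # cutpoint is truncated in the abstract, so it is a parameter here
        # (hypothetical placeholder, not a value from the source).
        if initial_decision_node(tender_joints, swollen_joints, esr_mm):
            return True
        return das28 <= das28_cutpoint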
Abstract:
Mesenchymal stem cells (MSCs) are non-hematopoietic multipotent stem cells capable of self-renewal and of differentiating along different cell lineages. MSCs can be found in adult tissues and in extra-embryonic tissues like the umbilical cord matrix/Wharton's Jelly (WJ). The latter constitutes a good source of MSCs, which are more naïve and have a higher proliferative potential than MSCs from adult tissues like the bone marrow, making them more appealing for clinical use. It is clear that MSCs modulate both innate and adaptive immune responses, and their immunomodulatory effects are wide, extending to T cells and dendritic cells, making them therapeutically useful for the treatment of immune system disorders. Mechanotransduction is, by definition, the mechanism by which cells transform mechanical signals, translating that information into biochemical and morphological changes. Here, we hypothesize that culturing WJ-MSCs on distinct substrates, with different stiffness and biochemical composition, may influence the immunomodulatory capacity of the cells. We showed that WJ-MSCs cultured on distinct PDMS substrates presented secretory profiles different from those of cells cultured on regular tissue culture polystyrene plates (TCP), showing higher secretion of several of the cytokines analysed. Moreover, WJ-MSCs cultured on PDMS substrates seem to possess higher immunomodulatory capabilities and to differentially regulate the functional compartments of T cells when compared to MSCs maintained on TCP. Taken together, our results suggest that elements of mechanotransduction influence the immunomodulatory ability of MSCs, as well as their secretory profile. Thus, future strategies will be explored to better understand these observations and to envisage new in vitro culture conditions for MSCs aimed at distinct therapeutic approaches, namely for immune-mediated disorders.
Abstract:
Monitoring Internet traffic is critical in order to acquire a good understanding of threats to computer and network security and in designing efficient computer security systems. Researchers and network administrators have applied several approaches to monitoring traffic for malicious content. These techniques include monitoring network components, aggregating IDS alerts, and monitoring unused IP address spaces. Another method for monitoring and analyzing malicious traffic, which has been widely tried and accepted, is the use of honeypots. Honeypots are very valuable security resources for gathering artefacts associated with a variety of Internet attack activities. As honeypots run no production services, any contact with them is considered potentially malicious or suspicious by definition. This unique characteristic of the honeypot reduces the amount of collected traffic and makes it a more valuable source of information than other existing techniques. Currently, there is insufficient research in the honeypot data analysis field. To date, most of the work on honeypots has been devoted to the design of new honeypots or optimizing the current ones. Approaches for analyzing data collected from honeypots, especially low-interaction honeypots, are presently immature, while analysis techniques are manual and focus mainly on identifying existing attacks. This research addresses the need for developing more advanced techniques for analyzing Internet traffic data collected from low-interaction honeypots. We believe that characterizing honeypot traffic will improve the security of networks and, if the honeypot data is handled in time, give early signs of new vulnerabilities or breakouts of new automated malicious codes, such as worms. The outcomes of this research include:
• identification of repeated use of attack tools and attack processes, through grouping activities that exhibit similar packet inter-arrival time distributions using the cliquing algorithm;
• application of principal component analysis to detect the structure of attackers' activities present in low-interaction honeypots and to visualize attackers' behaviors;
• detection of new attacks in low-interaction honeypot traffic through the use of the principal component's residual space and the square prediction error statistic;
• real-time detection of new attacks using recursive principal component analysis;
• a proof-of-concept implementation for honeypot traffic analysis and real-time monitoring.
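As a hedged illustration of the residual-space technique named in the outcomes above, the Python sketch below fits a PCA model to baseline honeypot traffic features and flags observations whose square prediction error (SPE) exceeds a control limit. The feature construction (e.g., per-interval packet counts by port) and the simple percentile limit are our assumptions, not the thesis's exact procedure:

    import numpy as np
    from sklearn.decomposition import PCA

    def fit_spe_detector(X_baseline: np.ndarray, n_components: int = 3):
        # Fit PCA on baseline traffic; the residual space holds whatever
        # the retained principal components fail to explain.
        pca = PCA(n_components=n_components).fit(X_baseline)
        recon = pca.inverse_transform(pca.transform(X_baseline))
        spe = ((X_baseline - recon) ** 2).sum(axis=1)  # square prediction error
        limit = np.percentile(spe, 99)  # simple empirical control limit (assumed)
        return pca, limit

    def flag_new_attacks(pca: PCA, limit: float, X_new: np.ndarray) -> np.ndarray:
        recon = pca.inverse_transform(pca.transform(X_new))
        spe = ((X_new - recon) ** 2).sum(axis=1)
        return spe > limit  # True where traffic departs from the baseline model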
Abstract:
In their studies, Eley and Meyer (2004) and Meyer and Cleary (1998) found that there are sources of variation in the affective and process dimensions of learning in mathematics and clinical diagnosis specific to each of these disciplines. Meyer and Shanahan (2002) argue that: "General purpose models of student learning that are transportable across different discipline contexts cannot, by definition, be sensitive to sources of variation that may be subject-specific" (2002, p. 204). In other words, to explain the differences in learning approaches and outcomes in a particular discipline, there are discipline-specific factors which cannot be uncovered in general educational research. Meyer and Shanahan (2002) argue for a need to "seek additional sources of variation that are perhaps conceptually unique ... within the discourse of particular disciplines" (p. 204). In this paper, the development of an economics-specific construct (called economic thinking ability) is reported. The construct aims to measure a discipline-sited ability of students that has an important influence on learning in economics. Using this construct, the economic thinking abilities of introductory and intermediate level economics students were measured prior to the commencement, and at the end, of their study over one semester. This enabled factors associated with students' pre-course economic thinking ability and their development in economic thinking ability to be investigated. The empirical findings will address the 'nature' versus 'nurture' debate in economics education (Frank et al., 1993; Frey et al., 1993; Haucap and Tobias 2003). The implications for future research in economics education will also be discussed.
Abstract:
Today’s evolving networks are experiencing a large number of different attacks, ranging from system break-ins and infection from automatic attack tools such as worms, viruses and trojan horses, to denial of service (DoS). One important aspect of such attacks is that they are often indiscriminate and target Internet addresses without regard to whether they are bona fide allocated or not. Due to the absence of any advertised host services, the traffic observed on unused IP addresses is by definition unsolicited and likely to be either opportunistic or malicious. The analysis of large repositories of such traffic can be used to extract useful information about both ongoing and new attack patterns and to unearth unusual attack behaviors. However, such an analysis is difficult due to the size and nature of the traffic collected on unused address spaces. In this dissertation, we present a network traffic analysis technique which uses traffic collected from unused address spaces and relies on the statistical properties of the collected traffic in order to accurately and quickly detect new and ongoing network anomalies. Detection of network anomalies is based on the concept that an anomalous activity usually transforms the network parameters in such a way that their statistical properties no longer remain constant, resulting in abrupt changes. In this dissertation, we use sequential analysis techniques to identify changes in the behavior of network traffic targeting unused address spaces, to unveil both ongoing and new attack patterns. Specifically, we have developed a dynamic sliding-window-based, non-parametric cumulative sum (CUSUM) change detection technique for the identification of changes in network traffic. Furthermore, we have introduced dynamic thresholds to detect changes in network traffic behavior and also to detect when a particular change has ended. Experimental results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach, using both synthetically generated datasets and real network traces collected from a dedicated block of unused IP addresses.
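In the same spirit, a minimal Python sketch of a sliding-window, non-parametric CUSUM detector is given below. The window length, drift allowance and threshold multiplier are illustrative assumptions, not the dissertation's tuned values:

    import numpy as np

    def sliding_cusum(counts, window=60, k=0.5, h=5.0):
        # counts: per-interval packet counts toward the unused address block.
        # Baseline mean/std are re-estimated over a sliding window, so the
        # effective threshold adapts to recent traffic (dynamic threshold).
        # The one-sided CUSUM statistic accumulates positive deviations
        # beyond a drift allowance of k standard deviations.
        s, alarms = 0.0, []
        for t in range(window, len(counts)):
            base = counts[t - window:t]
            mu, sigma = np.mean(base), np.std(base) + 1e-9
            s = max(0.0, s + (counts[t] - mu - k * sigma))
            if s > h * sigma:
                alarms.append(t)   # change detected at interval t
                s = 0.0            # reset; a later fall back below h marks its end
        return alarms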
Abstract:
For fruit flies, fully ripe fruit is preferred for adult oviposition and is superior for offspring performance over unripe or ripening fruit. Because not all parts of a single fruit ripen simultaneously, the opportunity exists for adult fruit flies to selectively choose riper parts of a fruit for oviposition, and such selection, if it occurs, could positively influence offspring performance. Such fine-scale host variation is rarely considered in fruit fly ecology, however, especially for polyphagous species which are, by definition, considered to be generalist host users. Here we study the adult oviposition preference/larval performance relationship of the Oriental fruit fly, Bactrocera dorsalis (Hendel) (Diptera: Tephritidae), a highly polyphagous pest species, at the “within-fruit” level to see if such a host use pattern occurs. We recorded the number of oviposition attempts that female flies made into three fruit portions (top, middle and bottom), and larval behavior and development within different fruit portions, for ripening (color change) and fully-ripe mango, Mangifera indica L. (Anacardiaceae). Results indicate that female B. dorsalis do not oviposit uniformly across a mango fruit, but lay most often in the top (i.e., stalk end) of fruit and least in the bottom portion, regardless of ripening stage. There was no evidence of larval feeding site preference or performance (development time, pupal weight, percent pupation) being influenced by fruit portion, within or across the fruit ripening stages. There was, however, a very significant effect on adult emergence from pupae: the emergence rate for pupae from the bottom of ripening mango was only approximately 50% of that for pupae from the top of ripening fruit, or from either the top or bottom of fully-ripe fruit. Differences in mechanical (firmness) and chemical (total soluble solids, titratable acidity, total non-structural carbohydrates) traits between different fruit portions were correlated with adult fruit utilisation. Our results support a positive adult preference/offspring performance relationship at the within-fruit level for B. dorsalis. The fine level of host discrimination exhibited by B. dorsalis is at odds with the general perception that, as a polyphagous herbivore, the fly should show very little discrimination in its host use behavior.
Abstract:
It is a startling fact that when, in the mid-1980s, a ‘third wave’ of democracy took hold in Latin America and Eastern Europe, both democracy and violence were simultaneously on the rise worldwide. Almost by definition, democracies represent an institutionalized framework and a way of life that ensures non-violent means to share power between communities of people with widely differing values and beliefs. As Keane (2004) points out, ‘violence is anathema to [democracy’s] spirit and substance’ (p. 1). Accordingly, the process of democratization was accompanied by expectations that violence would generally decrease, and that these countries would embark on a process of reducing levels of violence as Western European countries had done earlier in the 19th and 20th centuries.