988 results for: gap bilinear diffie hellman problem
Abstract:
Supersymmetric models with bilinear R-parity violation can account for the observed neutrino masses and mixing parameters indicated by neutrino oscillation data. We consider minimal supergravity versions of bilinear R-parity violation in which the lightest supersymmetric particle is a neutralino. The neutralino is unstable, with a decay length large enough for it to be detected at the CERN Large Hadron Collider. We analyze the potential of the Large Hadron Collider to determine the properties of the lightest supersymmetric particle, such as its mass, lifetime and branching ratios, and discuss their relation to neutrino properties.
Abstract:
Electron device integration has progressed for more than 40 years following the well-known Moore's law, which states that the density of transistors on a chip doubles every 24 months. This trend has been made possible by the downsizing (scaling) of MOSFET dimensions; however, new issues and challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. To overcome the limitations of conventional structures, the research community is assessing several possible solutions, including: devices incorporating materials with properties different from those of silicon for the channel and the source/drain regions; and new architectures such as Silicon-On-Insulator (SOI) transistors, in which the body thickness of Ultra-Thin-Body SOI devices is a new design parameter that allows short-channel effects to be kept under control without resorting to high doping levels in the channel. Among the solutions proposed to overcome the difficulties related to scaling are heterojunctions at the channel edge, obtained by adopting, for the source/drain regions, materials with a band gap different from that of the channel material. This solution increases the injection velocity of carriers travelling from the source into the channel, and therefore improves transistor performance in terms of drain current. The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and its adoption in earlier technologies such as heterojunction bipolar transistors; it then describes the modifications introduced in the Monte Carlo code to simulate conduction-band discontinuities, and the simulations performed on simplified one-dimensional structures to validate them.
Chapter 4 presents the results of Monte Carlo simulations of double-gate SOI transistors featuring conduction-band offsets between the source/drain regions and the channel. Attention is focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities; both graded and abrupt discontinuities are considered. The scaling of device dimensions and the adoption of innovative architectures also have consequences for power dissipation. In SOI technologies the channel is thermally insulated from the underlying substrate by a buried SiO2 layer whose thermal conductivity is two orders of magnitude lower than that of silicon, impeding the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, owing to phonon confinement and boundary scattering. Together these effects cause severe self-heating (SHE), which degrades carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated-circuit engineering. The second part of this thesis discusses self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices and briefly reviews the methods proposed to model these phenomena. To understand how self-heating affects different SOI architectures, three-dimensional electro-thermal simulations are applied to planar single- and double-gate SOI transistors as well as FinFETs with the same isothermal electrical characteristics.
In chapter 6 the same simulation approach is employed extensively to study the impact of self-heating on a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperature reached inside the device and the thermal resistance associated with the device, as well as the dependence of self-heating on the main geometrical parameters, are analyzed. Furthermore, the consequences for self-heating of technological solutions such as raised source/drain extension regions and reduced fin height are explored. Finally, conclusions are drawn in chapter 7.
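The buried-oxide argument above lends itself to a quick order-of-magnitude check. The sketch below compares the one-dimensional conduction resistance of a thin SiO2 layer with that of an equally thick silicon slab; the thickness, area and conductivity values are illustrative assumptions, not parameters of the simulated devices.

```python
# Back-of-the-envelope 1D thermal resistance of a buried-oxide (BOX) layer
# versus an equally thick slab of bulk silicon. All values are illustrative.

K_SIO2 = 1.4    # W/(m*K), thermal conductivity of SiO2
K_SI = 148.0    # W/(m*K), thermal conductivity of bulk silicon

def thermal_resistance(thickness_m, area_m2, conductivity):
    """1D conduction resistance R = t / (k * A), in K/W."""
    return thickness_m / (conductivity * area_m2)

# Hypothetical 100 nm BOX under a 1 um x 1 um active area
t_box = 100e-9
area = 1e-6 * 1e-6

r_box = thermal_resistance(t_box, area, K_SIO2)
r_si = thermal_resistance(t_box, area, K_SI)
ratio = r_box / r_si

print(f"R(BOX) = {r_box:.3g} K/W")
print(f"R(Si)  = {r_si:.3g} K/W")
print(f"ratio  = {ratio:.0f}x")  # roughly two orders of magnitude
```

The ratio reduces to K_SI/K_SIO2 (about 106 here), which is the "two orders of magnitude" gap the abstract refers to.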
Abstract:
A fundamental gap in the current understanding of collapsed structures in the universe concerns the thermodynamical evolution of the ordinary, baryonic component. Unopposed radiative cooling of plasma would lead to the cooling catastrophe, a massive inflow of condensing gas toward the centre of galaxies, groups and clusters. The last generation of multiwavelength observations has radically changed our view on baryons, suggesting that the heating linked to the active galactic nucleus (AGN) may be the balancing counterpart of cooling. In this Thesis, I investigate the engine of the heating regulated by the central black hole. I argue that mechanical feedback, based on massive subrelativistic outflows, is the key to solving the cooling flow problem, i.e. dramatically quenching the cooling rates for several billion years without destroying the cool-core structure. Using an upgraded version of the parallel 3D hydrodynamic code FLASH, I show that anisotropic AGN outflows can further reproduce fundamental observed features, such as buoyant bubbles, cocoon shocks, sonic ripples, metal dredge-up, and subsonic turbulence. The latter is an essential ingredient to drive nonlinear thermal instabilities, which cause cold gas condensation, a residual of the quenched cooling flow and, later, fuel for the AGN feedback engine. The self-regulated outflows are systematically tested on the scales of massive clusters, groups and isolated elliptical galaxies: in lighter, less bound objects the feedback needs to be gentler and less efficient, in order to avoid drastic overheating. In this Thesis, I describe in depth the complex hydrodynamics, involving the coupling of the feedback energy to that of the surrounding hot medium. Finally, I present the merits and flaws of all the proposed models, with a critical eye toward observational concordance.
Abstract:
This project intertwines philosophical and historico-literary themes, taking as its starting point the concept of tragic consciousness inherent in the epoch of classicism. The research work makes use of ontological categories in order to describe the underlying principles of the image of the world which was created in philosophical and scientific theories of the 17th century as well as in contemporary drama. Using these categories brought Mr. Vilk to the conclusion that the classical picture of the world implied a certain dualism; not the Manichaean division between light and darkness but the discrimination between nature and absolute being, i.e. God. Mr. Vilk begins with an examination of the philosophical essence of French classical theatre of the 17th and 18th centuries. The history of French classical tragedy can be divided into three periods: from the mid-17th to the early 19th century, when it triumphed all over France and exerted a powerful influence over almost all European countries; followed by the period of its rejection by the Romantics, who declared classicism to be "artificial and rational"; and finally our own century, which has taken a more moderate line. Nevertheless, French classical tragedy has never fully recovered its status. Instead, it is ancient tragedy and the works of Shakespeare that are regarded as the most adequate embodiment of the tragic. Consequently they still provoke a great number of new interpretations ranging from specialised literary criticism to more philosophical rumination. An important feature of classical tragedy is a system of rules and unities which reveals a hidden ontological structure of the world. The ontological picture of the dramatic world can be described in categories worked out by medieval philosophy - being, essence and existence. The first category is to be understood as a tendency toward permanency and stability (within eternity) connected with this or that fragment of dramatic reality.
The second implies a certain set of permanent elements that make up the reality. And the third - existence - should be understood as "an act of being", as a realisation of permanently renewed processes of life. All of these categories can be found in every artistic reality but the accents put on one or another and their interrelations create different ontological perspectives. Mr. Vilk plots the movement of thought, expressed in both philosophical and scientific discourses, away from Aristotle's essential forms, and towards a prioritising of existence, and shows how new forms of literature and drama structured the world according to these evolving requirements. At the same time the world created in classical tragedy fully preserves another ontological paradigm - being - as a fundamental permanence. As far as the tragic hero's motivations are concerned this paradigm is revealed in the dedication of his whole self to some cause, and his oath of fidelity, attitudes which shape his behaviour. It may be the idea of the State, or personal honour, or something borrowed from the emotional sphere, passionate love. Mr. Vilk views the conflicting ambivalence of existence and being, duty as responsibility and duty as fidelity, as underlying the main conflict of classical tragedy of the 17th century. Having plotted the movement of the being/existence duality through its manifestations in 17th century tragedy, Mr. Vilk moves to the 18th century, when tragedy took a philosophical turn. A dualistic view of the world became supplanted by the Enlightenment idea of a natural law, rooted in nature. The main point of tragedy now was to reveal that such conflicts as might take place had an anti-rational nature, that they arose as the result of a kind of superstition caused by social reasons. These themes Mr. Vilk now pursues through Russian dramatists of the 18th and early 19th centuries. He begins with Sumarakov, whose philosophical thought has a religious bias. 
According to Sumarakov, the dualism of the divineness and naturalness of man is on the one hand an eternal paradox, and on the other, a moral challenge for humans to try to unite the two opposites. His early tragedies are not concerned with social evils or the triumph of natural feelings and human reason, but rather the tragic disharmony in the nature of man and the world. Mr. Vilk turns next to the work of Kniazhnin. He is particularly keen to rescue his reputation from the judgements of critics who accuse him of being imitative, and in order to do so, analyses in detail the tragedy "Dido", in which Kniazhnin makes an attempt to revive the image of great heroes and city-founders. Aeneas represents the idea of the "being" of Troy; his destiny is the re-establishment of the city (the future Rome). The moral aspect behind this idea is faithfulness: he devotes himself to the gods. Dido is also the creator of a city, endowed with "natural powers" and abilities, but her creation lacks the internal stability grounded in "being". The unity of the two motives is only achieved through Dido's sacrifice of herself and her city to Aeneas. Mr. Vilk's next subject is Kheraskov, whose peculiarity lies in the influence of free-mason mysticism on his work. This section deals with one of the most important philosophical assumptions contained in the free-mason literature of the time - the idea of the trinitarian hierarchy inherent in man and the world: body - soul - spirit, and nature - law - grace. Finally, Mr. Vilk assesses the work of Ozerov, the last major Russian tragedian. The tragedies which earned him fame, "Oedipus in Athens", "Fingal" and "Dmitri Donskoi", present a compromise between the Enlightenment's emphasis on harmony and ontological tragic conflict. But it is in "Polixene" that a real meeting of the Russian tradition with the age-old history of the genre takes place.
The male and female characters of "Polixene" distinctly express the elements of "being" and "existence". Each of the participants of the conflict possesses some dominant characteristic personifying a certain indispensable part of the moral world, a certain "virtue". But their independent efforts are unable to overcome the ontological gap separating them. The end of the tragedy - Polixene's sacrificial self-immolation - paradoxically combines the glorification of each party involved in the conflict, and their condemnation. The final part of Mr. Vilk's research deals with the influence of "Polixene" upon subsequent dramatic art. In this respect Katenin's "Andromacha", inspired by "Polixene", is important to mention. In "Andromacha" a decisive divergence from the principles of the philosophical tragedy of Russian classicism and the ontology of classicism occurs: a new character appears as an independent personality, directed by his private interest. It was Katenin who was to become the intermediary between Pushkin and classical tragedy.
Abstract:
During the first Kibaki administration (2002-2007), a movement of former Mau Mau fighters demanded recognition for the role they had played in the achievement of independence. They also began to demand monetary compensation for past injustices. Why had it taken over 40 years (from independence in 1963) for the former Mau Mau fighters to initiate this movement? What can be observed as the outcome of their movement? To answer these questions, three historical currents need to be taken into account: changing trends in the government of Kenya; progress in historical research into the actual circumstances of colonial control; and a realization, based on mounting experience, that launching a legal action against Britain could prove a lucrative initiative. This paper concludes that, regardless of the actual purpose of the legal case, neither of their objectives was certain to be achieved. Two inescapable realities remain. First, the government's decision to lift the Mau Mau's outlaw status cast doubt on its reputation, being widely seen as a latter-day example of the 'Kikuyu favouritism' policy followed by the first Kibaki administration. Second, the involvement of Leigh Day, well known in Kenya ever since the unexploded-bombs case for its success in obtaining substantial compensation payments, was popularly interpreted as a vehicle for squeezing large amounts of money from the British government for the benefit of the Kikuyu people.
Abstract:
Problem-based learning has been applied over the last three decades to a diverse range of learning environments. In this educational approach, different problems are posed to the learners so that they can develop different solutions while learning about the problem domain. When applied to conceptual modelling, and particularly to Qualitative Reasoning, the solutions to problems are models that represent the behaviour of a dynamic system. The learner's task then is to bridge the gap between their initial model, as their first attempt to represent the system, and the target models that provide solutions to that problem. We propose the use of semantic technologies and resources to help in bridging that gap by providing links to terminology and formal definitions, and matching techniques to allow learners to benefit from existing models.
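One simple form of the matching the abstract refers to is lexical: link a learner's term to the closest entry in the target model's vocabulary. The sketch below uses plain string similarity as a stand-in for semantic matching; the vocabulary, terms and threshold are invented for illustration.

```python
from difflib import SequenceMatcher

def best_match(term, vocabulary, threshold=0.6):
    """Return the vocabulary entry most similar to `term` (lexically),
    or None if nothing clears the threshold. Real systems would follow
    ontology links rather than rely on string similarity."""
    scored = [(SequenceMatcher(None, term.lower(), v.lower()).ratio(), v)
              for v in vocabulary]
    score, match = max(scored)
    return match if score >= threshold else None

# Hypothetical target-model vocabulary for a Qualitative Reasoning exercise
target_terms = ["population size", "birth rate", "death rate", "food supply"]

print(best_match("birthrate", target_terms))   # -> birth rate
print(best_match("velocity", target_terms))    # -> None
```

A learner writing "birthrate" would be pointed at the target model's "birth rate" quantity, while an unrelated term yields no suggestion.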
Abstract:
In maritime transportation, decisions are made in a dynamic setting where many aspects of the future are uncertain. However, most academic literature on maritime transportation considers static and deterministic routing and scheduling problems. This work addresses a gap in the literature on dynamic and stochastic maritime routing and scheduling problems, by focusing on the scheduling of departure times. Five simple strategies for setting departure times are considered, as well as a more advanced strategy which involves solving a mixed integer mathematical programming problem. The latter strategy is significantly better than the other methods, while adding only a small computational effort.
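The gap between a naive fixed rule and an optimizing strategy can be illustrated with a toy single-voyage example. The brute-force search below stands in for the mixed-integer programming strategy the abstract describes; the costs, time window and sailing-time scenarios are all invented for illustration.

```python
# Toy comparison of two departure-time strategies for one voyage:
# "depart as soon as ready" versus choosing the departure time that
# minimizes expected cost over a handful of sailing-time scenarios.

READY = 0                  # hours: cargo ready for loading
OPEN, DUE = 48, 50         # destination time window (hours)
WAIT_COST = 5              # cost per hour waiting in port before departure
EARLY_COST = 20            # cost per hour arriving before the window opens
LATE_COST = 100            # cost per hour arriving after the window closes
SCENARIOS = [40, 45, 47]   # equally likely sailing times (hours)

def expected_cost(depart):
    """Waiting cost plus expected earliness/lateness penalties."""
    arrivals = [depart + s for s in SCENARIOS]
    early = sum(max(0, OPEN - a) for a in arrivals) / len(arrivals)
    late = sum(max(0, a - DUE) for a in arrivals) / len(arrivals)
    return WAIT_COST * (depart - READY) + EARLY_COST * early + LATE_COST * late

naive_cost = expected_cost(READY)                   # depart immediately
best_t = min(range(READY, DUE), key=expected_cost)  # brute-force optimum
print(naive_cost, best_t, round(expected_cost(best_t), 1))
```

Here delaying departure by a few hours avoids expected earliness penalties and beats the "depart as soon as ready" rule, which is the qualitative point of comparing simple strategies against an optimization-based one.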
Abstract:
This article is devoted to the design of optimal electromagnets for magnetic-levitation transport systems. A design method based on solving the inverse problem of electrical engineering is proposed. It differs from known methods by introducing a stage that minimizes target functions subject to the required levitation force, the magnetic induction in the gap, and the mass of the electromagnet. Initial parameter values are obtained from approximate formulas of electric-apparatus theory. An example of the method's application is given, and the results demonstrate its high efficiency. The proposed method, and the computer program implementing it, are suitable for use as part of a computer-aided design system for electrical equipment for magnetic-levitation transport.
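The kind of relation such a design procedure inverts can be sketched with the standard Maxwell-stress expression for the attractive force across an air gap, F = B²A/(2μ0). This is a generic textbook relation, not the paper's actual inverse-problem formulation; the target force and pole area below are hypothetical.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def levitation_force(b_gap, pole_area):
    """Attractive force across an air gap (Maxwell stress):
    F = B^2 * A / (2 * mu0)."""
    return b_gap ** 2 * pole_area / (2 * MU0)

def required_induction(force, pole_area):
    """Invert the relation: gap induction B needed for a target force."""
    return math.sqrt(2 * MU0 * force / pole_area)

# Hypothetical target: 10 kN per magnet over two 0.01 m^2 pole faces
b = required_induction(10e3, 2 * 0.01)
print(f"required gap induction: {b:.2f} T")
```

Inverting the force relation for B (about 1.1 T here) is the simplest instance of working backwards from a stated levitation force to electromagnet parameters; the paper's method adds mass minimization on top of constraints of this kind.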
Abstract:
There has been a significant gap in the gambling literature regarding the role of culture in gambling and problem gambling (PG). This paper aims to reduce this gap by presenting a systematic review of cultural variations in gambling and PG, as well as a discussion of the role cultural variables can play in the initiation and maintenance of gambling, in order to stimulate further research. The review shows that although studies investigating prevalence rates of gambling and PG among different cultures are not plentiful, evidence does suggest that certain cultural groups are more vulnerable to begin gambling and to develop PG. Significant factors, including familial/genetic, sociological, and individual factors, have been identified in the Western gambling literature as playing important roles in the development and maintenance of PG. These factors now need to be examined in other cultural groups so we can better understand the etiological processes involved in PG and design culturally sensitive treatments. In addition, variables such as cultural values and beliefs, the process of acculturation, and the influence of culturally determined help-seeking behaviors also need to be examined in relation to the role they could play in the initiation and maintenance of gambling. Understanding the contribution of cultural variables will allow us to devise better prevention and treatment options for PG. Methodological problems in this area of research are highlighted, and suggestions for future research are included. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
Research expeditions into remote areas to collect biological specimens provide vital information for understanding biodiversity. However, major expeditions to little-known areas are expensive and time consuming, time is short, and well-trained people are difficult to find. In addition, processing the collections and obtaining accurate identifications takes time and money. In order to get the maximum return for the investment, we need to determine the location of the collecting expeditions carefully. In this study we used environmental variables and information on existing collecting localities to help determine the sites of future expeditions. Results from other studies were used to aid in the selection of the environmental variables, including variables relating to temperature, rainfall, lithology and distance between sites. A survey gap analysis tool based on 'ED complementarity' was employed to select the sites that would most likely contribute the most new taxa. The tool does not evaluate how well collected a previously visited survey site might be; however, collecting effort was estimated based on species accumulation curves. We used the number of collections and/or number of species at each collecting site to eliminate those we deemed poorly collected. Plants, birds, and insects from Guyana were examined using the survey gap analysis tool, and sites for future collecting expeditions were determined. The south-east section of Guyana had virtually no collecting information available. It has been inaccessible for many years for political reasons and as a result, eight of the first ten sites selected were in that area. In order to evaluate the remainder of the country, and because there are no immediate plans by the Government of Guyana to open that area to exploration, that section of the country was not included in the remainder of the study. The range of the ED complementarity values dropped sharply after the first ten sites were selected.
For plants, the group for which we had the most records, areas selected included several localities in the Pakaraima Mountains, the border with the south-east, and one site in the north-west. For birds, a moderately collected group, the strongest need was in the north-west followed by the east. Insects had the smallest data set and the largest range of ED complementarity values; the results gave strong emphasis to the southern parts of the country, but most of the locations appeared to be equidistant from one another, most likely because of insufficient data. Results demonstrate that the use of a survey gap analysis tool designed to solve a locational problem using continuous environmental data can help maximize our resources for gathering new information on biodiversity. (c) 2005 The Linnean Society of London.
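As a rough illustration of the complementarity idea (not the actual ED-complementarity algorithm used in the study), the sketch below greedily picks candidate sites that lie farthest, in a standardized environmental space, from everything already surveyed or selected; all coordinates are fabricated.

```python
import math

def env_distance(a, b):
    """Euclidean distance in (standardized) environmental space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def greedy_survey_sites(candidates, surveyed, k):
    """Pick k candidate sites, each time choosing the one farthest (in
    environmental space) from all surveyed/selected sites: a simple
    max-min stand-in for ED complementarity."""
    chosen = []
    covered = list(surveyed)
    for _ in range(k):
        site = max(candidates,
                   key=lambda c: min(env_distance(c, s) for s in covered))
        chosen.append(site)
        covered.append(site)
        candidates = [c for c in candidates if c != site]
    return chosen

# Hypothetical (temperature, rainfall) coordinates, already standardized
surveyed = [(0.0, 0.0), (0.1, 0.2)]
candidates = [(0.2, 0.1), (2.0, 2.0), (-1.5, 0.5), (2.2, 1.8)]
print(greedy_survey_sites(candidates, surveyed, 2))
```

The first pick is the candidate most environmentally unlike all existing collections; each subsequent pick also avoids the sites just chosen, which is why the marginal value of new sites drops off, as the abstract observes for the ED complementarity values.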
Abstract:
A phenomenon common to almost all fields is that there is a gap between theory and practical implementation. However, this is a particular problem in knowledge management, where much of the literature consists of general principles written in the context of a 'knowledge world' that has few, if any, references to how to carry out knowledge management in organisations. In this chapter, we put forward the view that the best way to bridge this gap between general principles and the specific issues facing a given organisation is to link knowledge management to the organisation's business processes. After briefly reviewing, and rejecting, alternative ways in which this gap might be bridged, the chapter goes on to explain the justification for, and the potential benefits and snags of, linking knowledge management to business processes. Successful and unsuccessful examples are presented. We concentrate especially on the issues of establishing what knowledge is relevant to an organisation at present, the need for organisational learning to cope with the inevitable change, and the additional problems posed by the growing internationalisation of operations. We conclude that linking knowledge management to business processes is the best route for organisations to follow, but that it is not the answer to all knowledge management problems, especially where different cultures and/or cultural change are involved.
Abstract:
Personal selling and sales management play a critical role in the short and long term success of the firm, and have thus received substantial academic interest since the 1970s. Sales research has examined the role of the sales manager in some depth, defining a number of key technical and interpersonal roles which sales managers have in influencing sales force effectiveness. However, one aspect of sales management which appears to remain unexplored is that of their resolution of salesperson-related problems. This study represents the first attempt to address this gap by reporting on the conceptual and empirical development of an instrument designed to measure sales managers' problem resolution styles. A comprehensive literature review and qualitative research study identified three key constructs relating to sales managers' problem resolution styles. The three constructs identified were termed: sales manager willingness to respond, sales manager caring, and sales manager aggressiveness. Building on this, existing literature was used to develop a conceptual model of salesperson-specific consequences of the three problem resolution style constructs. The quantitative phase of the study consisted of a mail survey of UK salespeople, achieving a total sample of 140 fully usable responses. Rigorous statistical assessment of the sales manager problem resolution style measures was undertaken, and construct validity examined. Following this, the conceptual model was tested using latent variable path analysis. The results for the model were encouraging overall, and also with regard to the individual hypotheses. Sales manager problem resolution styles were found individually to have significant impacts on the salesperson-specific variables of role ambiguity, emotional exhaustion, job satisfaction, organisational commitment and organisational citizenship behaviours.
The findings, theoretical and managerial implications, limitations and directions for future research are discussed.
Abstract:
AMS subject classification: 90C31, 90A09, 49K15, 49L20.
Abstract:
*Designated as an exemplary master's project for 2015-16*
The American approach to disparities in educational achievement is deficit focused and based on false assumptions of equal educational opportunity and social mobility. The labels attached to children served by compensatory early childhood education programs have evolved, e.g., from "culturally deprived" into "at-risk" for school failure, yet remain rooted in deficit discourses and ideology. Drawing on multiple bodies of literature, this thesis analyzes the rhetoric of compensatory education as viewed through the conceptual lens of the deficit thinking paradigm, in which school failure is attributed to perceived genetic, cultural, or environmental deficiencies, rather than institutional and societal inequalities. With a focus on the evolution of deficit thinking, the thesis begins with late 19th century U.S. early childhood education as it set the stage for more than a century of compensatory education responses to the needs of children, inadequacies of immigrant and minority families, and threats to national security. Key educational research and publications on genetic-, cultural-, and environmental-deficits are aligned with trends in achievement gaps and compensatory education initiatives, beginning mid-20th century following the Brown v. Board of Education decision of 1954 and continuing to the present. This analysis then highlights patterns in the oppression, segregation, and disenfranchisement experienced by low-income and minority students, largely ignored within the mainstream compensatory education discourse. This thesis concludes with a heterodox analysis of how the deficit thinking paradigm is dependent on assumptions of equal educational opportunity and social mobility, which helps perpetuate the cycle of school failure amid larger social injustices.
Abstract:
Empirical studies of education programs and systems, by nature, rely upon use of student outcomes that are measurable. Often, these come in the form of test scores. However, in light of growing evidence about the long-run importance of other student skills and behaviors, the time has come for a broader approach to evaluating education. This dissertation undertakes experimental, quasi-experimental, and descriptive analyses to examine social, behavioral, and health-related mechanisms of the educational process. My overarching research question is simply, which inside- and outside-the-classroom features of schools and educational interventions are most beneficial to students in the long term? Furthermore, how can we apply this evidence toward informing policy that could effectively reduce stark social, educational, and economic inequalities?
The first study of three assesses mechanisms by which the Fast Track project, a randomized intervention in the early 1990s for high-risk children in four communities (Durham, NC; Nashville, TN; rural PA; and Seattle, WA), reduced delinquency, arrests, and health and mental health service utilization in adolescence through young adulthood (ages 12-20). A decomposition of treatment effects indicates that about a third of Fast Track’s impact on later crime outcomes can be accounted for by improvements in social and self-regulation skills during childhood (ages 6-11), such as prosocial behavior, emotion regulation and problem solving. These skills proved less valuable for the prevention of mental and physical health problems.
The second study contributes new evidence on how non-instructional investments – such as increased spending on school social workers, guidance counselors, and health services – affect multiple aspects of student performance and well-being. Merging several administrative data sources spanning the 1996-2013 school years in North Carolina, I use an instrumental variables approach to estimate the extent to which local expenditure shifts affect students' academic and behavioral outcomes. My findings indicate that exogenous increases in spending on non-instructional services not only reduce student absenteeism and disciplinary problems (important predictors of long-term outcomes) but also significantly raise student achievement, in similar magnitude to corresponding increases in instructional spending. Furthermore, subgroup analyses suggest that investments in student support personnel (such as social workers, health services, and guidance counselors) in schools with concentrated low-income student populations could go a long way toward closing socioeconomic achievement gaps.
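The logic of the instrumental-variables approach can be sketched on synthetic data: with a single instrument, the IV estimator reduces to cov(z, y)/cov(z, x), which removes the bias a confounder induces in ordinary least squares, provided the instrument shifts x but affects y only through x. The data-generating process below is invented for illustration and is not the study's actual specification.

```python
# Synthetic demonstration of instrumental variables versus OLS.
import random

random.seed(42)

BETA = 2.0  # true causal effect of x (spending) on y (outcomes)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]   # instrument
u = [random.gauss(0, 1) for _ in range(n)]   # unobserved confounder
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [BETA * xi + 3.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

ols = cov(x, y) / cov(x, x)  # biased upward by the confounder (about 3.0)
iv = cov(z, y) / cov(z, x)   # close to the true effect of 2.0
print(f"OLS: {ols:.2f}, IV: {iv:.2f}")
```

Because the confounder u raises both spending and outcomes here, OLS overstates the effect, while the instrument recovers it; in the study, the instrument is a source of exogenous variation in local expenditures rather than a simulated variable.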
The third study examines individual pathways that lead to high school graduation or dropout. It employs a variety of machine learning techniques, including decision trees, random forests with bagging and boosting, and support vector machines, to predict student dropout using longitudinal administrative data from North Carolina. I consider a large set of predictor measures from grades three through eight including academic achievement, behavioral indicators, and background characteristics. My findings indicate that the most important predictors include eighth grade absences, math scores, and age-for-grade as well as early reading scores. Support vector classification (with a high cost parameter and low gamma parameter) predicts high school dropout with the highest overall validity in the testing dataset at 90.1 percent followed by decision trees with boosting and interaction terms at 89.5 percent.
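The tree-based side of this comparison can be illustrated with a one-split "decision stump", the simplest member of the decision-tree family: it scans feature thresholds and keeps the split that best separates dropouts from graduates. The records and features below are fabricated, and the split direction is fixed (predict dropout when the feature exceeds the threshold) for brevity; the study itself fits full trees, ensembles and support vector machines to administrative data.

```python
# Toy decision stump for dropout prediction on fabricated records:
# (absences_8th_grade, math_score, dropped_out)
records = [
    (2, 85, 0), (5, 78, 0), (1, 92, 0), (8, 70, 0),
    (25, 55, 1), (30, 60, 1), (22, 48, 1), (4, 88, 0),
    (28, 52, 1), (3, 81, 0),
]

def stump_accuracy(feature_idx, threshold):
    """Training accuracy of 'predict dropout when feature > threshold'."""
    correct = sum((r[feature_idx] > threshold) == bool(r[2]) for r in records)
    return correct / len(records)

def fit_stump():
    """Exhaustively search features and observed thresholds for the
    best single split."""
    return max(
        ((i, t) for i in (0, 1) for t in sorted({r[i] for r in records})),
        key=lambda it: stump_accuracy(*it),
    )

feature, threshold = fit_stump()
print(feature, threshold, stump_accuracy(feature, threshold))
```

On this toy data the stump latches onto eighth-grade absences, echoing the study's finding that absences are among the strongest predictors; real trees recurse on further splits, and ensembles average many such trees.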