934 results for upscale extensions
Abstract:
We analyse the interplay between the Higgs-to-diphoton rate and the constraints from electroweak precision measurements in extensions of the Standard Model with new uncolored charged fermions that do not mix with the ordinary ones. We also compute the pair-production cross sections for the lightest fermion and compare them with current bounds.
Abstract:
The forest-like characteristics of agroforestry systems create a unique opportunity to combine agricultural production with biodiversity conservation in human-modified tropical landscapes. The cacao-growing region in southern Bahia, Brazil, encompasses Atlantic forest remnants and large extensions of agroforests, locally known as cabrucas, and harbors several endemic large mammals. Based on the differences between cabrucas and forests, we hypothesized that: (1) non-native and non-arboreal mammals are more frequent, whereas exclusively arboreal and hunted mammals are less frequent in cabrucas than forests; (2) the two systems differ in mammal assemblage structure, but not in species richness; and (3) mammal assemblage structure is more variable among cabrucas than forests. We used camera-traps to sample mammals in nine pairs of cabruca-forest sites. The high conservation value of agroforests was supported by the presence of species of conservation concern in cabrucas, and similar species richness and composition between forests and cabrucas. Arboreal species were less frequently recorded, however, and a non-native and a terrestrial species adapted to open environments (Cerdocyon thous) were more frequently recorded in cabrucas. Factors that may overestimate the conservation value of cabrucas are: the high proportion of total forest cover in the study landscape, the impoverishment of large mammal fauna in forest, and uncertainty about the long-term maintenance of agroforestry systems. Our results highlight the importance of agroforests and forest remnants for providing connectivity in human-modified tropical forest landscapes, and the importance of controlling hunting and dogs to increase the value of agroforestry mosaics.
Abstract:
Among the soils of Mato Grosso do Sul, the Spodosols stand out in the Pantanal biome. Despite being recorded over considerable extensions, few studies have aimed to characterize and classify these soils. The purpose of this study was to characterize and classify soils in three areas of two physiographic types in the Taquari river basin: bay and flooded fields. Two trenches were opened in the bay area (P1 and P2) and two in the flooded field (P3 and P4). The third area (saline), with high sodium levels, was sampled for further studies. In the soils of both areas the sand fraction was predominant and the texture ranged from sand to sandy loam, with quartz as the main constituent. In the bay area, the soil organic carbon (OC) content in the surface layer (P1) was > 80 g kg-1, which was diagnosed as a Histic epipedon. In the other profiles the surface horizons had low OC levels which, associated with other properties, classified them as Ochric epipedons. In the soils of the bay area (P1 and P2), the pH ranged from 5.0 to 7.5, associated with dominance of Ca2+ and Mg2+, with base saturation above 50 % in some horizons. In the flooded fields (P3 and P4) the soil pH ranged from 4.9 to 5.9, H+ contents were high in the surface horizons (0.8-10.5 cmolc kg-1), Ca2+ and Mg2+ contents ranged from 0.4 to 0.8 cmolc kg-1, and base saturation was < 50 %. In the soils of the bay area (P1 and P2), both iron (extracted by dithionite, Fed) and OC accumulated in the spodic horizon; in the P3 and P4 soils only Fed accumulated (in the subsurface layers). According to the criteria adopted by the Brazilian System of Soil Classification (SiBCS) at the subgroup level, the soils were classified as: P1, Organic Hydromorphic Ferrohumiluvic Spodosol; P2, Typical Orthic Ferrohumiluvic Spodosol; P3, Typical Hydromorphic Ferroluvic Spodosol; P4, Arenic Orthic Ferroluvic Spodosol.
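The base-saturation thresholds quoted above follow from a simple ratio of exchangeable cations. As an illustrative sketch (the function name and the sample values are ours, chosen only to match the ranges reported in the abstract, not measured data):

```python
def base_saturation(ca, mg, k=0.0, na=0.0, h_al=0.0):
    """Base saturation V% = 100 * SB / CEC, where SB = Ca2+ + Mg2+ + K+ + Na+
    is the sum of exchangeable bases and CEC = SB + (H+ + Al3+),
    all contents in cmolc kg-1."""
    sb = ca + mg + k + na
    return 100.0 * sb / (sb + h_al)

# A flooded-field horizon with Ca2+ and Mg2+ near the reported lower
# bound and high H+ falls well below the 50 % threshold:
print(base_saturation(ca=0.4, mg=0.4, h_al=10.5))  # ≈ 7.1 (< 50 %)
```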
Abstract:
The aim of this study was to analyze the rat temporomandibular joint (TMJ) synovial membrane at different ages using light, scanning, and transmission electron microscopy. Under light microscopy, TMJ structures such as the condyle, capsule and disk were observed, as well as the collagen type and cell distribution of the synovial membrane. Under scanning electron microscopy, the synovial membrane surface exhibited a smooth aspect in young animals, and the number of folds increased with ageing. Transmission electron microscopy showed more synoviocytes in the synovial layer in the young group, and a great number of vesicles and dilated cisterns of rough endoplasmic reticulum in the aged group. In all three groups, a dense layer of collagen fibers in the synovial layer and cytoplasmic extensions were clearly seen. It was possible to conclude that the synovial membrane structures in the aged group showed alterations that contribute to a decrease in joint lubrication and in the sliding between the disk and the joint surfaces. These characteristics will affect the biomechanics of chewing and may cause the TMJ disorders currently observed in clinical practice. Microsc. Res. Tech. (c) 2012 Wiley Periodicals, Inc.
Abstract:
The thesis consists of three independent parts. Part I: Polynomial amoebas. We study the amoeba of a polynomial, as defined by Gelfand, Kapranov and Zelevinsky. A central role in the treatment is played by a certain convex function which is linear in each complement component of the amoeba, which we call the Ronkin function. This function is used in two different ways. First, we use it to construct a polyhedral complex, which we call a spine, approximating the amoeba. Second, the Monge-Ampere measure of the Ronkin function has interesting properties which we explore. This measure can be used to derive an upper bound on the area of an amoeba in two dimensions. We also obtain results on the number of complement components of an amoeba, and consider possible extensions of the theory to varieties of codimension higher than 1. Part II: Differential equations in the complex plane. We consider polynomials in one complex variable arising as eigenfunctions of certain differential operators, and obtain results on the distribution of their zeros. We show that in the limit when the degree of the polynomial approaches infinity, its zeros are distributed according to a certain probability measure. This measure has its support on the union of finitely many curve segments, and can be characterized by a simple condition on its Cauchy transform. Part III: Radon transforms and tomography. This part is concerned with different weighted Radon transforms in two dimensions, in particular the problem of inverting such transforms. We obtain stability results for this inverse problem for rather general classes of weights, including weights of attenuation type with data acquisition limited to a 180-degree range of angles. We also derive an inversion formula for the exponential Radon transform, with the same restriction on the angle.
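For readers unfamiliar with the objects of Part I, the standard definitions (as given by Gelfand, Kapranov and Zelevinsky and in Ronkin's work; the notation below is ours, not necessarily the thesis's) read:

```latex
% Amoeba of a Laurent polynomial f in n complex variables:
\mathcal{A}_f \;=\; \operatorname{Log}\bigl(f^{-1}(0)\bigr),
\qquad
\operatorname{Log}(z_1,\dots,z_n) \;=\; \bigl(\log|z_1|,\dots,\log|z_n|\bigr).

% Ronkin function: convex on R^n and affine-linear on each
% complement component of the amoeba:
N_f(x) \;=\; \frac{1}{(2\pi i)^n}
\int_{\operatorname{Log}^{-1}(x)} \frac{\log\lvert f(z)\rvert}{z_1\cdots z_n}
\, dz_1\cdots dz_n .
```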
Abstract:
The nature of the dark matter in the Universe is one of the greatest mysteries in modern astronomy. The neutralino is a nonbaryonic dark matter candidate in minimal supersymmetric extensions of the standard model of particle physics. If the dark matter halo of our galaxy is made up of neutralinos, some would become gravitationally trapped inside massive bodies like the Earth. Their pair-wise annihilation produces neutrinos that can be detected by neutrino experiments looking in the direction of the centre of the Earth. The AMANDA neutrino telescope, currently the largest in the world, consists of an array of light detectors buried deep in the Antarctic glacier at the geographical South Pole. The extremely transparent ice acts as a Cherenkov medium for muons passing the array, and using the timing information of detected photons it is possible to reconstruct the muon direction. A search has been performed for nearly vertically upgoing neutrino-induced muons in AMANDA-B10 data taken over the three-year period 1997-99. No excess above the atmospheric neutrino background expectation was found. Upper limits at the 90 % confidence level have been set on the annihilation rate of neutralinos at the centre of the Earth and on the muon flux induced by neutrinos created by the annihilation products.
Abstract:
Interaction protocols establish how different computational entities can interact with each other. The interaction can be aimed at the exchange of data, as in 'communication protocols', or oriented towards achieving some result, as in 'application protocols'. Moreover, with the increasing complexity of modern distributed systems, protocols are also used to control such complexity and to ensure that the system as a whole evolves with certain features. However, the extensive use of protocols has raised some issues, from the language for specifying them to the several aspects of verification. Computational Logic provides models, languages and tools that can be effectively adopted to address such issues: its declarative nature can be exploited for a protocol specification language, while its operational counterpart can be used to reason upon such specifications. In this thesis we propose a proof-theoretic framework, called SCIFF, together with its extensions. SCIFF is based on Abductive Logic Programming, and provides a formal specification language with a clear declarative semantics (based on abduction). The operational counterpart is given by a proof procedure that allows one to reason upon the specifications and to test the conformance of given interactions w.r.t. a defined protocol. Moreover, by suitably adapting the SCIFF framework, we propose solutions for addressing (1) the verification of protocol properties (g-SCIFF Framework), and (2) the a-priori conformance verification of peers w.r.t. a given protocol (AlLoWS Framework). We also introduce an agent-based architecture, the SCIFF Agent Platform, where the same protocol specification can be used to program the interacting peers and to ease their implementation task.
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing and managing typically distributed software systems at runtime. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a central point of scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers and, as a result, most of them are at an early stage and still belong to a context of mostly 'academic' approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies. In fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. 
the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is at least clear that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions - entities of the environment encapsulating some functions - and topology abstractions - entities of the environment that represent its (either logical or physical) spatial structure. In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which can be used by designers to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With a number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how to best provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions: • The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain. • Simulation and verification infrastructure must be put in place to explore, validate and optimize the NoC performance. • NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions. • Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs. 
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
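The packet-switching paradigm mentioned above is often paired, on 2D-mesh NoCs, with dimension-ordered routing, which is simple to sketch. This is a generic textbook illustration, not the actual ×pipes routing logic:

```python
def xy_route(src, dst):
    """Dimension-ordered (XY) routing on a 2D mesh NoC: a packet first
    travels along the x dimension, then along y. It is deadlock-free
    because the y-to-x turn is never taken. Returns the list of (x, y)
    hops from src to dst, endpoints included."""
    (x, y), (dx, dy) = src, dst
    path = [(x, y)]
    while x != dx:                      # route in x first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                      # then in y
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

# From tile (0, 0) to tile (2, 1): three hops.
print(xy_route((0, 0), (2, 1)))  # [(0, 0), (1, 0), (2, 0), (2, 1)]
```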
Abstract:
The progress of electron device integration has proceeded for more than 40 years following the well–known Moore's law, which states that the transistor density on chip doubles every 24 months. This trend has been possible thanks to the downsizing of MOSFET dimensions (scaling); however, new issues and new challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. In order to overcome the limitations related to conventional structures, the research community is preparing different solutions that need to be assessed. Possible solutions currently under scrutiny are: • devices incorporating materials with properties different from those of silicon, for the channel and the source/drain regions; • new architectures such as Silicon–On–Insulator (SOI) transistors: the body thickness of Ultra–Thin–Body SOI devices is a new design parameter, and it makes it possible to keep Short–Channel–Effects under control without adopting high doping levels in the channel. Among the solutions proposed to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edge, obtained by adopting for the source/drain regions materials with a band–gap different from that of the channel material. This solution increases the injection velocity of the particles travelling from the source into the channel, and therefore the performance of the transistor in terms of provided drain current. The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction–bipolar–transistors; moreover, the modifications introduced in the Monte Carlo code to simulate conduction band discontinuities are described, as well as the simulations performed on one-dimensional simplified structures to validate them. 
Chapter 4 presents the results obtained from the Monte Carlo simulations performed on double–gate SOI transistors featuring conduction band offsets between the source and drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered. The scaling of device dimensions and the adoption of innovative architectures have consequences on power dissipation as well. In SOI technologies the channel is thermally insulated from the underlying substrate by a SiO2 buried–oxide layer; this SiO2 layer features a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self–heating effects, which detrimentally impact the carrier mobility and therefore the saturation drive current of high–performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated circuit engineering. The second part of this thesis discusses the problem of self–heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices, and provides a brief overview of the methods that have been proposed to model these phenomena. In order to understand how this problem impacts the performance of different SOI architectures, three–dimensional electro–thermal simulations have been applied to the analysis of SHE in planar single– and double–gate SOI transistors as well as FinFETs, featuring the same isothermal electrical characteristics. 
In chapter 6 the same simulation approach is extensively employed to study the impact of SHE on the performance of a FinFET representative of the high–performance transistor of the 45 nm technology node. Its effects on the ON–current, the maximum temperatures reached inside the device and the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, have been analyzed. Furthermore, the consequences for self–heating of technological solutions such as raised S/D extension regions or reduced fin height are explored as well. Finally, conclusions are drawn in chapter 7.
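To first order, the self-heating discussed above is governed by a lumped thermal resistance; as a back-of-the-envelope sketch (the symbols and the one-dimensional slab approximation for the buried oxide are ours, not the thesis's model):

```latex
% Steady-state temperature rise of the channel under dissipated power:
\Delta T \;\approx\; R_{\mathrm{th}}\, P_{\mathrm{diss}},
\qquad
R_{\mathrm{th}} \;\approx\; \frac{t_{\mathrm{BOX}}}{\kappa_{\mathrm{SiO_2}}\, A},
% with typical bulk values kappa_{SiO2} ~ 1.4 W/(m K) versus
% kappa_{Si} ~ 150 W/(m K): the two-orders-of-magnitude gap
% noted in the text.
```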
Abstract:
In this thesis, ten new symmetric (176,50,14) designs and one new symmetric (144,66,30) design are constructed by prescribing non-solvable automorphism groups. In 1969, G. Higman discovered a symmetric (176,50,14) design whose full automorphism group is the sporadic simple group HS of order 44,352,000. Here, designs admitting a subgroup of HS were sought. The following subgroups were considered: the transitive and the intransitive extension of an elementary abelian group of order 16 by Alt(5), AGL(3,2), the direct product of a cyclic group of order 5 with Alt(5), and PSL(2,11). The transitive extension of E(16) by Alt(5) yielded two new designs with automorphism groups of orders 960 and 11,520, respectively; the latter could also be obtained with the intransitive extension. The group PSL(2,11) acts on the points of the Higman design in three orbits; searching for symmetric (176,50,14) designs on which this group acts in two orbits yields eight new designs. The remaining groups yielded no new designs. Finally, a new symmetric (144,66,30) design could be constructed using the sporadic Mathieu group M(12). At the time, this was, apart from the Higman design, the only known symmetric design whose full automorphism group is essentially a sporadic simple group.
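Both parameter sets quoted satisfy the standard counting identity for a symmetric 2-(v,k,λ) design, an easy consistency check:

```latex
\lambda\,(v - 1) \;=\; k\,(k - 1):
\qquad 14 \cdot 175 \;=\; 50 \cdot 49 \;=\; 2450,
\qquad 30 \cdot 143 \;=\; 66 \cdot 65 \;=\; 4290.
```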
Abstract:
Summary: To investigate phase separation in binary polymer blends, two dynamic extensions of self-consistent field theory (SCFT) are developed. The first method uses a temporal evolution of the densities and is called dynamic self-consistent field theory (DSCFT), while the second method exploits the temporal propagation of the effective external fields of SCFT. This method is referred to as External Potential Dynamics (EPD). For DSCFT, kinetic coefficients are used that reproduce either the local dynamics of point particles or the non-local dynamics of Rouse polymers. With a constant kinetic coefficient, the EPD method generates the dynamics of Rouse chains and requires less computing time than DSCFT. These methods are applied to various systems. First, spinodal decomposition in the bulk is investigated, with the focus on the difference between local and non-local dynamics. To check the validity of the results, Monte Carlo simulations are performed. In polymer blends confined by two walls that both prefer the same polymer species, the formation of enrichment layers at the walls is studied. For thin polymer films between antisymmetric walls, i.e. where each wall prefers a different polymer species, the tension of an interface formed parallel to the walls is analyzed and the phase transition from an initially homogeneous mixture to the localized phase is considered. Furthermore, the dynamics of capillary-wave modes is investigated.
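In its simplest local form, the density propagation in DSCFT is a conserved (model-B) continuity equation; a minimal sketch in standard notation (φ the local composition, μ the exchange chemical potential from the SCFT free energy F, Λ the kinetic coefficient) — the thesis's own equations, in particular the non-local Rouse and EPD variants, will differ in detail:

```latex
\frac{\partial \phi(\mathbf{r},t)}{\partial t}
\;=\; \nabla \cdot \Bigl( \Lambda(\mathbf{r})\, \nabla \mu(\mathbf{r},t) \Bigr),
\qquad
\mu(\mathbf{r},t) \;=\; \frac{\delta F[\phi]}{\delta \phi(\mathbf{r},t)} .
```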
Abstract:
In his groundbreaking work, Fagin shows that the complexity class NP can be identified with the logical language 'existential second-order logic'. A simple and therefore tractable fragment of this language is monadic NP. Fagin describes monadic NP as a '...training ground for attacking the problems in their full generality'. In this thesis, two kinds of monadic extensions of monadic NP are investigated. The first part deals with weak built-in relations. A built-in relation B is called weak if: monadic NP + B + polynomial padding ≠ NP. Two new classes of weak built-in relations (infinitely divisible and packable built-in relations) are introduced. The main result of this part is a classification of all known weak built-in relations in terms of these two classes. In the second part of this thesis, monadic closures of monadic NP are considered. Of particular interest is the positive first-order closure of monadic NP (in short: PFO(monNP)). The main result of this part is the statement that non-k-colorability (k ≥ 3) is not expressible in PFO(monNP).
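Fagin's theorem and the monadic restriction can be stated compactly; 3-colorability is the classic example of a monadic-NP property (these are standard formulations, not taken from the thesis):

```latex
% Fagin's theorem: NP coincides with existential second-order logic.
\mathrm{NP} \;=\; \exists\mathrm{SO}.
% Monadic NP restricts the quantified relations to unary ones.
% Example: 3-colorability of a graph with edge relation E:
\exists R\, \exists G\, \exists B\;
\Bigl[ \forall x\, (Rx \lor Gx \lor Bx) \;\land\;
\forall x\, \forall y\, \bigl( E(x,y) \rightarrow
\neg\bigl((Rx \land Ry) \lor (Gx \land Gy) \lor (Bx \land By)\bigr) \bigr) \Bigr].
```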
Abstract:
Medulloblastoma (MB) is a paediatric malignant brain tumour, sensitive to ionizing radiation (IR). However, radiotherapy has detrimental effects on long-term survivors, and the tumour is incurable in a third of patients due to intrinsic radioresistance. Alterations of the Wnt pathway distinguish a molecular subgroup of MBs, and nuclear beta-catenin, indicative of activated Wnt, is associated with good outcome in MB. There is therefore increasing evidence of Wnt involvement in the radio-response: IR induces activation of Wnt signalling with nuclear translocation of beta-catenin in MB cell lines. We studied the effects of Wnt pathway activation in a p53 wild-type MB cell line, UW228-1. Cells were stably transfected with a constitutively active beta-catenin and assessed for growth curves, mortality rate, invasiveness and differentiation. First, activation of the Wnt pathway by itself induced slower cell growth and higher mortality. After IR treatment, nuclear beta-catenin further inhibited cell growth, increasing mortality. Cell invasiveness was strongly inhibited by Wnt activation. Furthermore, the Wnt cell population was characterized by club-shaped cells with long cytoplasmic extensions containing neurofilaments, suggesting a neural differentiation of this cell line. These findings suggest that nuclear beta-catenin may lead to a less aggressive phenotype and increased radio-sensitivity in MB, accounting for its favourable prognostic value. In the future, Wnt/beta-catenin signalling may be considered as a molecular therapeutic target for developing new drugs for the treatment of MB.