850 results for least median of squares
Abstract:
We report a measurement of the top quark mass, m_t, obtained from ppbar collisions at sqrt(s) = 1.96 TeV at the Fermilab Tevatron using the CDF II detector. We analyze a sample corresponding to an integrated luminosity of 1.9 fb^-1. We select events with an electron or muon, large missing transverse energy, and exactly four high-energy jets in the central region of the detector, at least one of which is tagged as coming from a b quark. We calculate a signal likelihood using a matrix element integration method, with effective propagators to take into account assumptions on event kinematics. Our event likelihood is a function of m_t and a parameter JES that determines in situ the calibration of the jet energies. We use a neural network discriminant to distinguish signal from background events. We also apply a cut on the peak value of each event likelihood curve to reduce the contribution of background and badly reconstructed events. Using the 318 events that pass all selection criteria, we find m_t = 172.7 +/- 1.8 (stat. + JES) +/- 1.2 (syst.) GeV/c^2.
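To make the fitting strategy concrete: per-event likelihood curves in (m_t, JES) are combined multiplicatively, and JES is calibrated in situ by profiling it away at each mass hypothesis. The sketch below illustrates only that combination-and-profiling step using Gaussian stand-ins; the grids, resolutions, and event count are invented for illustration and are not CDF's matrix-element integrals.

```python
import numpy as np

# Hypothesis grids for the top-quark mass and the jet energy scale (JES).
mt_grid = np.linspace(165.0, 180.0, 151)      # GeV/c^2
jes_grid = np.linspace(0.95, 1.05, 41)

rng = np.random.default_rng(1)
MT, JES = np.meshgrid(mt_grid, jes_grid, indexing="ij")

def event_log_likelihood(mt_true=172.7, jes_true=1.0):
    """Illustrative stand-in for one event's likelihood curve: a 2-D Gaussian
    in (m_t, JES) centred on a smeared per-event estimate."""
    mt_hat = mt_true + rng.normal(0.0, 2.0)
    jes_hat = jes_true + rng.normal(0.0, 0.02)
    return -0.5 * (((MT - mt_hat) / 2.0) ** 2 + ((JES - jes_hat) / 0.02) ** 2)

# Combine events multiplicatively (add log-likelihoods), then calibrate JES
# in situ by profiling: maximize over JES at each mass hypothesis.
total = sum(event_log_likelihood() for _ in range(50))
profile = total.max(axis=1)
mt_fit = mt_grid[np.argmax(profile)]
```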
Abstract:
We report on a search for the supersymmetric partner of the bottom quark produced in gluino decays, in data from 2.5 fb^-1 of integrated luminosity collected by the Collider Detector at Fermilab at sqrt(s) = 1.96 TeV. Candidate events are selected by requiring two or more jets and large missing transverse energy. At least two of the jets are required to be tagged as originating from a b quark to enhance the sensitivity. The results are in good agreement with the predictions of standard model processes, giving no evidence for gluino decay to sbottom quarks. This result constrains the gluino-pair-production cross section to be less than 40 fb at 95% credibility level for a gluino mass of 350 GeV.
Abstract:
On the one hand this thesis attempts to develop and empirically test an ethically defensible theorization of the relationship between human resource management (HRM) and competitive advantage. The specific empirical evidence indicates that at least part of HRM's causal influence on employee performance may operate indirectly, first through a social architecture and then through psychological empowerment. However, the evidence concerning a potential influence of HRM on organizational performance in particular seems to put in question some of the rhetoric within the HRM research community. On the other hand, the thesis tries to explicate and defend a certain attitude towards the philosophically oriented debates within organization science. This involves suggestions as to how we should understand meaning, reference, truth, justification and knowledge. On this understanding it is not fruitful to see either the problems of empirical social science, or their solutions, as fundamentally philosophical ones. It is argued that the notorious problems of social science, exemplified in this thesis by research on HRM, can be seen as related to dynamic complexity in combination with both the ethical and the pragmatic difficulty of "laboratory-like experiments". Solutions … can only be sought by informed trial and error, depending on the perceived familiarity with the object(s) of research. The odds are against anybody who hopes for clearly adequate social scientific answers to more complex questions. Social science is in particular unlikely to arrive at largely accepted knowledge of the kind "if we do this, then that will happen", or even "if we do this, then that is likely to happen". One of the problems probably facing most social scientific research communities is to specify and agree upon the "this" and the "that" and to provide convincing evidence of how they are (causally) related. On most more complex questions the role of social science seems largely to remain that of contributing to a (critical) conversation, rather than arriving at more generally accepted knowledge. This is ultimately what is both argued and, in a sense, demonstrated using research on the relationship between HRM and organizational performance as an example.
Abstract:
Landscape is shaped by the natural environment and, increasingly, by human activity. In landscape ecology, a landscape can be defined as a kilometre-scale mosaic formed by different land-use types. In the Helsinki Metropolitan Region, the landscape change caused by urbanization has accelerated since the 1950s; before that, the landscape of the region was shaped mainly by agriculture. The goal of this study was, in addition to describing the landscape change, to discuss the factors driving the change and to evaluate its landscape ecological impacts. Three study areas at different distances from the Helsinki city centre were chosen in order to examine the landscape change: the Malmi, Espoo and Mäntsälä regions, representing different parts of the urban-to-rural gradient, studied in 1955, 1975, 1990 and 2009. Land use was digitized from the maps into five classes (agricultural lands, semi-natural grasslands, built areas, waters and others) using GIS methods. First, landscape change was studied using landscape ecological indices: PLAND, the proportions of the different land-use types in the landscape; MPS, SHEI and SHDI, which describe fragmentation and heterogeneity of the landscape; and MSI and ED, which are measures of patch shape. Second, landscape change was studied statistically in relation to the topography, soil and urban structure of the study areas. The indicators of urban structure were the number of residents, car ownership and travel-related zones of urban form, which indicate the degree of urban sprawl within the study areas. For the statistical analyses, each of the 9.25 x 9.25 km study areas was further divided into grids with a resolution of 0.25 x 0.25 km. Third, the changes in the green structure of the study areas were evaluated. The landscape change reflected by the proportions of the land-use types was most notable in the Malmi area, where a large amount of agricultural land was developed from 1955 to 2009. The proportion of semi-natural grasslands also showed an interesting pattern in relation to urbanization: when urbanization started, a great number of agricultural lands were abandoned and turned into semi-natural grasslands, but as urbanization accelerated, the number of semi-natural grasslands started to decline because of urban densification. Landscape fragmentation and heterogeneity were greatest in the Espoo study area, not only because of the great differences in relative heights within the region but also because of its location in the rural-urban fringe. According to the results, urbanization made agricultural lands more regular in shape both spatially and temporally, whereas for built areas and semi-natural grasslands the impact of urbanization was the reverse. Changes in the landscape were smallest in the most rural study area, Mäntsälä. In Mäntsälä, built area per resident showed the greatest values, indicating widespread urban sprawl; the values were smallest in the highly urbanized Malmi study area. Unlike in the other study areas, in Mäntsälä the proportion of developing land in the ecologically disadvantageous car-dependent zone was increasing. On the other hand, the green structure of the Mäntsälä study area was the most advantageous, whereas the Malmi study area showed the most ecologically disadvantageous structure.
Considering all the landscape ecological criteria used, the landscape structure of the Espoo study area proved to be the best, not least because of the great heterogeneity of its landscape. The study thus confirmed previous findings that landscape heterogeneity is greatest in areas exposed to moderate human impact.
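The indices named above follow standard FRAGSTATS definitions. As a minimal sketch (assuming a hypothetical classified raster, not the study's data), the composition and diversity indices can be computed as follows; the patch-based indices (MPS, MSI, ED) additionally require connected-component patch delineation, which is omitted here.

```python
import numpy as np

def landscape_indices(raster):
    """Composition and diversity indices for a classified land-use raster.

    PLAND_i = 100 * (cells of class i) / (all cells)   -- class proportion
    SHDI    = -sum_i p_i * ln(p_i)                     -- Shannon diversity
    SHEI    = SHDI / ln(number of classes)             -- Shannon evenness
    """
    classes, counts = np.unique(raster, return_counts=True)
    p = counts / counts.sum()
    pland = {int(c): 100.0 * float(pi) for c, pi in zip(classes, p)}
    shdi = float(-np.sum(p * np.log(p)))
    shei = shdi / np.log(len(classes)) if len(classes) > 1 else 0.0
    return pland, shdi, shei

# Hypothetical 0.25 km grid over a 9.25 x 9.25 km study area; class codes:
# 0 agricultural, 1 semi-natural grassland, 2 built, 3 water, 4 other.
rng = np.random.default_rng(0)
grid = rng.integers(0, 5, size=(37, 37))
pland, shdi, shei = landscape_indices(grid)
```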
Abstract:
This paper focuses on a new high-frequency (HF) link dc-to-three-phase-ac power converter. The lowest switching-device count among HF-link dc-to-three-phase-ac converters, improved power density due to the absence of devices with bidirectional voltage-blocking capability, simple commutation requirements, and isolation between input and output are the integral features of this topology. The commutation process of the converter requires zero portions in the link voltage, which causes a nonlinear distortion in the output three-phase voltages. A mathematical analysis is carried out to investigate the problem, and suitable compensation of the modulating signal is proposed for different types of carrier. Along with the modified modulator structure, a synchronously rotating reference-frame-based control scheme is adopted for the three-phase ac side in order to achieve high dynamic performance. The effectiveness of the proposed scheme has been investigated and verified through computer simulations and experimental results with a 1-kVA prototype.
Abstract:
Present-day power systems are growing in size and complexity of operation, with interconnections to neighboring systems, the introduction of large generating units, EHV 400/765 kV AC transmission systems, HVDC systems, and more sophisticated control devices such as FACTS. Planning and operational studies require suitable modeling of all components in the power system as increasing numbers of HVDC systems and FACTS devices of different types are incorporated. This paper presents reactive power optimization with three objectives: minimizing the sum of the squares of the voltage deviations (ve) of the load buses, minimizing the sum of the squares of the voltage stability L-indices of the load buses (ΣL^2), and minimizing the system real power loss (Ploss). The proposed methods have been tested on a typical sample system. Results for an Indian 96-bus equivalent system including an HVDC terminal and a UPFC under normal and contingency conditions are presented.
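For reference, the voltage stability L-index mentioned above is commonly computed with the Kessel-Glavitsch formulation, L_j = |1 - Σ_i F_ji V_i / V_j| with F = -inv(Y_LL) Y_LG. A minimal sketch of the three objective terms under that formulation, with hypothetical inputs rather than the paper's test system:

```python
import numpy as np

def l_indices(Y_LL, Y_LG, V_load, V_gen):
    """Kessel-Glavitsch L-index per load bus: L_j = |1 - (F @ V_gen)_j / V_j|
    with F = -inv(Y_LL) @ Y_LG. Inputs are complex bus-admittance partitions
    and complex voltage phasors; values approach 1 near voltage collapse."""
    F = -np.linalg.solve(Y_LL, Y_LG)      # shape: load buses x generator buses
    return np.abs(1.0 - (F @ V_gen) / V_load)

def objective_terms(V_load, L, p_loss, v_ref=1.0):
    """The three minimization targets: ve, sum of L^2, and Ploss."""
    ve = float(np.sum((np.abs(V_load) - v_ref) ** 2))
    return ve, float(np.sum(L ** 2)), p_loss
```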
Abstract:
Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any arbitrary k of n nodes. In addition, however, regenerating codes possess the ability to repair a failed node by connecting to any arbitrary d nodes and downloading an amount of data that is typically far less than the size of the data file; this amount of download is termed the repair bandwidth. Minimum storage regenerating (MSR) codes are a subclass of regenerating codes that require the least amount of network storage; every such code is a maximum distance separable (MDS) code. Further, when a replacement node stores data identical to that in the failed node, the repair is termed exact. The four principal results of the paper are (a) the explicit construction of a class of MDS codes for d = n - 1 >= 2k - 1, termed the MISER code, that achieves the cut-set bound on the repair bandwidth for the exact repair of systematic nodes, (b) a proof of the necessity of interference alignment in exact-repair MSR codes, (c) a proof showing the impossibility of constructing linear, exact-repair MSR codes for d < 2k - 3 in the absence of symbol extension, and (d) the construction, also explicit, of high-rate MSR codes for d = k + 1. Interference alignment (IA) is a theme that runs throughout the paper: the MISER code is built on the principles of IA, and IA is also a crucial component of the nonexistence proof for d < 2k - 3. To the best of our knowledge, the constructions presented in this paper are the first explicit constructions of regenerating codes that achieve the cut-set bound.
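For concreteness, the minimum-storage operating point of the cut-set bound fixes per-node storage at α = B/k and per-helper download at β = B/(k(d-k+1)) for a file of B symbols, so the total repair bandwidth dβ is far below B whenever d > k. A small sketch of these operating-point formulas (parameter values are purely illustrative):

```python
from fractions import Fraction

def msr_point(n: int, k: int, d: int, B: int):
    """Minimum-storage-regenerating (MSR) point of the cut-set bound:
    per-node storage alpha = B/k, per-helper download beta = alpha/(d-k+1),
    total repair bandwidth gamma = d * beta (< B whenever d > k)."""
    assert k <= d <= n - 1, "a helper set of d surviving nodes must exist"
    alpha = Fraction(B, k)
    beta = alpha / (d - k + 1)
    gamma = d * beta
    return alpha, beta, gamma

# MISER-code regime from the abstract: d = n - 1 >= 2k - 1.
alpha, beta, gamma = msr_point(n=6, k=3, d=5, B=9)  # alpha=3, beta=1, gamma=5 < B=9
```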
Abstract:
We consider counterterms for odd dimensional holographic conformal field theories (CFTs). These counterterms are derived by demanding cutoff independence of the CFT partition function on S^d and S^1 x S^(d-1). The same choice of counterterms leads to a cutoff independent Schwarzschild black hole entropy. When treated as independent actions, these counterterm actions resemble critical theories of gravity, i.e., higher curvature gravity theories where the additional massive spin-2 modes become massless. Equivalently, in the context of AdS/CFT, these are theories where at least one of the central charges associated with the trace anomaly vanishes. Connections between these theories and logarithmic CFTs are discussed. For a specific choice of parameters, the theories arising from counterterms are nondynamical and resemble a Dirac-Born-Infeld generalization of gravity. For even dimensional CFTs, analogous counterterms cancel log-independent cutoff dependence.
Abstract:
Multi-GPU machines are being increasingly used in high-performance computing. Each GPU in such a machine has its own memory and does not share the address space either with the host CPU or with other GPUs. Hence, applications utilizing multiple GPUs have to manually allocate and manage data on each GPU. Existing works that propose to automate data allocation for GPUs have limitations and inefficiencies in terms of allocation sizes, exploiting reuse, transfer costs, and scalability. We propose a scalable and fully automatic data allocation and buffer management scheme for affine loop nests on multi-GPU machines, called the Bounding-Box-based Memory Manager (BBMM). At runtime, BBMM can perform standard set operations like union, intersection, and difference, and find subset and superset relations, on hyperrectangular regions of array data (bounding boxes). It uses these operations, along with some compiler assistance, to identify, allocate, and manage the data required by applications in terms of disjoint bounding boxes. This allows it to (1) allocate exactly or nearly as much data as is required by the computations running on each GPU, (2) efficiently track buffer allocations, and hence maximize data reuse across tiles and minimize data transfer overhead, and (3) as a result, maximize utilization of the combined memory on multi-GPU machines. BBMM can work with any choice of parallelizing transformations, computation placement, and scheduling schemes, whether static or dynamic. Experiments run on a four-GPU machine with various scientific programs showed that BBMM reduces data allocations on each GPU by up to 75% compared to current allocation schemes, yields performance of at least 88% of manually written code, and allows excellent weak scaling.
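A minimal sketch of the hyperrectangle operations such a scheme relies on, with hypothetical class and method names rather than BBMM's actual interface:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Box:
    """An n-dimensional bounding box: per-dimension [lo, hi) index intervals."""
    lo: Tuple[int, ...]
    hi: Tuple[int, ...]

    def contains(self, other: "Box") -> bool:
        """Subset test: True if `other` lies inside this box in every dimension."""
        return all(a <= c and d <= b
                   for a, b, c, d in zip(self.lo, self.hi, other.lo, other.hi))

    def intersect(self, other: "Box") -> Optional["Box"]:
        """Exact intersection of two boxes, or None if they are disjoint."""
        lo = tuple(max(a, c) for a, c in zip(self.lo, other.lo))
        hi = tuple(min(b, d) for b, d in zip(self.hi, other.hi))
        return Box(lo, hi) if all(l < h for l, h in zip(lo, hi)) else None

    def hull(self, other: "Box") -> "Box":
        """Smallest box covering both: a conservative stand-in for union."""
        return Box(tuple(min(a, c) for a, c in zip(self.lo, other.lo)),
                   tuple(max(b, d) for b, d in zip(self.hi, other.hi)))

# Two tiles' accessed regions of a 2-D array: their overlap is reusable data.
a, b = Box((0, 0), (64, 64)), Box((32, 32), (96, 96))
shared = a.intersect(b)   # Box(lo=(32, 32), hi=(64, 64))
```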
Abstract:
Liquid drops impacted on textured surfaces undergo a transition from the Cassie state, characterized by the presence of air pockets inside the roughness valleys below the drop, to an impaled state with at least one of the roughness valleys filled with drop liquid. This occurs when the drop impact velocity exceeds a particular value referred to as the critical impact velocity. The present study investigates such a transition process during water drop impact on surfaces textured with unidirectional parallel grooves, referred to as groove-textured surfaces. The process of liquid impalement into a groove in the vicinity of drop impact through de-pinning of the three-phase contact line (TPCL) beneath the drop, as well as the critical impact velocity, were identified experimentally from high speed video recordings of water drop impact on six different groove-textured surfaces made from intrinsically hydrophilic (stainless steel) as well as intrinsically hydrophobic (PDMS and rough aluminum) materials. The surface energy of various 2-D configurations of the liquid-vapor interface beneath the drop near the drop impact point was theoretically investigated to identify the locally stable configurations and establish a pathway for the liquid impalement process. A force balance analysis performed on the liquid-vapor interface configuration just prior to TPCL de-pinning provided an expression for the critical drop impact velocity, U_o,cr, beyond which the drop state transitions from the Cassie to an impaled state. The theoretical model predicts that U_o,cr increases with increasing pillar side angle, α, and intrinsic hydrophobicity, whereas it decreases with increasing groove top width, w, of the groove-textured surface. The quantitative predictions of the theoretical model were found to show good agreement with the experimental measurements of U_o,cr plotted against the surface texture geometry factor in our model, {tan(α/2)/w}^0.5.
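The stated scaling can be written as U_o,cr = C · {tan(α/2)/w}^0.5. A small sketch under that assumption; the prefactor C is a hypothetical fitted constant, not a value from the paper:

```python
import math

def critical_impact_velocity(alpha_deg: float, w: float, C: float) -> float:
    """U_o,cr = C * sqrt(tan(alpha/2) / w): increases with pillar side angle
    alpha, decreases with groove top width w (C: assumed fit constant)."""
    return C * math.sqrt(math.tan(math.radians(alpha_deg) / 2.0) / w)

# Purely illustrative numbers: a 60-degree pillar side angle, a 50-micron
# groove top width; C is a hypothetical constant, not a value from the paper.
u_cr = critical_impact_velocity(alpha_deg=60.0, w=50e-6, C=0.02)
```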
Abstract:
The potential of graphene oxide-Fe3O4 nanoparticle (GO-Fe3O4) composites as image-contrast-enhancing materials in magnetic resonance imaging has been investigated. Proton relaxivity values were obtained in three different homogeneous dispersions of GO-Fe3O4 composites synthesized by precipitating Fe3O4 nanoparticles in three different reaction mixtures containing 0.01 g, 0.1 g, and 0.2 g of graphene oxide. A noticeable difference in proton relaxivity values was observed between the three cases. A comprehensive structural and magnetic characterization revealed discrete differences in the extent of reduction of the graphene oxide and in the spacing between the graphene oxide sheets in the three composites. The GO-Fe3O4 composite framework that contained graphene oxide with the least extent of reduction of the carboxyl groups and the largest spacing between the graphene oxide sheets provided the optimum structure for yielding a very high transverse proton relaxivity value. The GO-Fe3O4 composites were found to possess good biocompatibility with normal cell lines, whereas they exhibited considerable toxicity towards breast cancer cells. © 2015 AIP Publishing LLC.
Abstract:
Contaminant behaviour in soils and fractured rock is very complex, not least because of the heterogeneity of the subsurface environment. For non-aqueous phase liquids (NAPLs), a liquid density contrast and interfacial tension between the contaminant and the interstitial fluid add to the complexity of behaviour, increasing the difficulty of predicting NAPL behaviour in the subsurface. This paper outlines the need for physical model tests that can improve fundamental understanding of NAPL behaviour in the subsurface, enhance risk assessments of NAPL contaminated sites, reduce uncertainty associated with NAPL source remediation and improve current technologies for NAPL plume remediation. Four case histories are presented to illustrate physical modelling approaches that have addressed problems associated with NAPL transport, remediation and source zone characterization. © 2006 Taylor & Francis Group, London.
Abstract:
We consider cooperation situations where players have network relations. Networks evolve according to a stationary transition probability matrix, and at each moment in time players receive payoffs from a stationary allocation rule. Players discount the future by a common factor. The pair formed by an allocation rule and a transition probability matrix is called a forward-looking network formation scheme if, first, the probability that a link is created is positive if the discounted, expected gains to its two participants are positive, and if, second, the probability that a link is eliminated is positive if the discounted, expected gains to at least one of its two participants are positive. The main result is the existence, for all discount factors and all value functions, of a forward-looking network formation scheme. Furthermore, we can always find a forward-looking network formation scheme such that (i) the allocation rule is component balanced and (ii) the transition probabilities increase in the difference in payoffs for the corresponding players responsible for the transition. We use this dynamic solution concept to explore the tension between efficiency and stability.
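The two defining conditions can be encoded directly as predicates on discounted expected gains; a minimal sketch (all names and inputs are hypothetical):

```python
def must_allow_creation(gain_i: float, gain_j: float) -> bool:
    """Condition 1: the probability of creating link ij must be positive
    when BOTH players' discounted expected gains from it are positive."""
    return gain_i > 0.0 and gain_j > 0.0

def must_allow_elimination(gain_i: float, gain_j: float) -> bool:
    """Condition 2: the probability of eliminating link ij must be positive
    when AT LEAST ONE player's discounted expected gain from removal is positive."""
    return gain_i > 0.0 or gain_j > 0.0

# An allocation rule paired with a transition probability matrix is a
# forward-looking scheme exactly when its transition probabilities respect
# both predicates for every link in every current network state.
```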
Abstract:
Support in R for state space estimation via Kalman filtering was limited to one package until fairly recently. In the last five years, the situation has changed, with no fewer than four additional packages offering general implementations of the Kalman filter, including in some cases smoothing, simulation smoothing, and other functionality. This paper reviews some of the offerings in R to help the prospective user make an informed choice.
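As a reminder of the recursion these packages implement, a minimal Kalman filter for the local level model (illustrative only, not any particular package's interface):

```python
import numpy as np

def kalman_filter_local_level(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Kalman filter for the local level model:
    y_t = mu_t + eps_t,  mu_{t+1} = mu_t + eta_t  (diffuse prior via large p0)."""
    a, p = a0, p0                     # predicted state mean and variance
    filtered = np.empty(len(y))
    for t, yt in enumerate(y):
        f = p + sigma_eps2            # prediction-error variance
        k = p / f                     # Kalman gain
        a = a + k * (yt - a)          # filtered mean (measurement update)
        p = p * (1.0 - k)             # filtered variance
        filtered[t] = a
        p = p + sigma_eta2            # time update for the next step
    return filtered
```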
Abstract:
Historical definitions of what determines whether one lives in a coastal area have varied over time. According to Culliton (1998), a “coastal county” is a county with at least 15% of its total land area located within a nation’s coastal watershed. This emphasizes the land areas within which water flows into the ocean or Great Lakes, but may be better suited to ecosystem or water quality research (Crowell et al. 2007). Some Federal Emergency Management Agency (FEMA) documents suggest that “coastal” includes shoreline-adjacent coastal counties, and perhaps even counties impacted by flooding from coastal storms. An accurate definition of “coastal” is critical in this regard, since FEMA uses such definitions to revise and modernize its Flood Insurance Rate Maps (Crowell et al. 2007). A recent map published by the National Oceanic and Atmospheric Administration’s (NOAA) Coastal Services Center for the Coastal Change Analysis Program shows that the “coastal” boundary covers the entire states of New York and Michigan, while nearly all of South Carolina is considered “coastal.” The definition of “coastal” one chooses can have major implications, affecting even a simple count of the coastal population and the reach of local or state coastal policies. There is, however, one aspect of defining what is “coastal” that has often been overlooked: using long-term atmospheric climate variables to define the inland extent of the coastal zone. This definition, which incorporates temperature, precipitation, wind speed, and relative humidity, is furthermore scalable and globally applicable, even in the face of shifting shorelines. A robust definition using common climate variables should narrow the broad definition often associated with “coastal” such that completely landlocked locations would no longer be considered “coastal.” Moreover, the resulting definition, “coastal climate” or “climatology of the coast”, will help coastal resource managers make better-informed decisions on a wide range of climatologically influenced issues. The following sections outline the methodology employed to derive new maps of coastal boundaries in the United States.