875 results for Boundaries of firms
Abstract:
Cloud computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality-of-service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which optimally detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive predictive SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
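The sketch below illustrates, in Python, the contrast the abstract draws between reactive and autoregressive predictive SLA-driven scaling. It is a minimal illustration, not the algorithms from the paper: the class names, the per-VM capacity model, the scale-in threshold, and the AR(3) least-squares forecast are assumptions made for the example.

```python
# Hypothetical sketch of reactive vs. predictive SLA-driven VM scaling (not the paper's code).
from collections import deque

import numpy as np


class ReactiveScaler:
    """Scale out only after the measured response time breaches the SLA bound."""

    def __init__(self, sla_response_ms: float, capacity_per_vm: float):
        self.sla = sla_response_ms          # SLA-specified response-time bound
        self.capacity = capacity_per_vm     # requests/s one VM can serve within the SLA (assumed)

    def decide(self, response_ms: float, arrival_rate: float, vms: int) -> int:
        if response_ms > self.sla:                      # SLA already violated -> scale out
            return vms + 1
        if response_ms < 0.5 * self.sla and vms > 1:    # ample headroom -> scale in
            return vms - 1
        return vms


class PredictiveScaler(ReactiveScaler):
    """Fit a small autoregressive model to recent arrival rates and provision ahead of a breach."""

    def __init__(self, sla_response_ms: float, capacity_per_vm: float,
                 order: int = 3, history: int = 50):
        super().__init__(sla_response_ms, capacity_per_vm)
        self.order = order
        self.window: deque = deque(maxlen=history)

    def _forecast_arrivals(self) -> float:
        y = np.asarray(self.window, dtype=float)
        if y.size <= self.order + 1:                    # not enough history yet
            return float(y[-1]) if y.size else 0.0
        # Least-squares AR(order) fit: predict y[t] from the preceding `order` samples.
        rows = y.size - self.order
        X = np.column_stack([y[i:i + rows] for i in range(self.order)])
        coeffs, *_ = np.linalg.lstsq(X, y[self.order:], rcond=None)
        return float(y[-self.order:] @ coeffs)

    def decide(self, response_ms: float, arrival_rate: float, vms: int) -> int:
        self.window.append(arrival_rate)
        needed = int(np.ceil(self._forecast_arrivals() / self.capacity))
        # Never provision less than the purely reactive rule would.
        return max(needed, super().decide(response_ms, arrival_rate, vms))


if __name__ == "__main__":
    scaler = PredictiveScaler(sla_response_ms=200.0, capacity_per_vm=50.0)
    vms = 3
    for t in range(100):
        load = 100 + 80 * np.sin(t / 10.0)              # synthetic variable workload
        vms = scaler.decide(response_ms=150.0, arrival_rate=load, vms=vms)
    print("VMs suggested after 100 steps:", vms)
```

The key difference the example captures is that the predictive variant sizes the pool from a one-step workload forecast before the SLA is violated, whereas the reactive variant only responds after a breach is observed.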
Abstract:
This thesis consists of four essays on the design and disclosure of compensation contracts. Essays 1, 2 and 3 focus on behavioral aspects of mandatory compensation disclosure rules and of contract negotiations in agency relationships. The three experimental studies develop psychology-based theory and present results that deviate from standard economic predictions. Furthermore, the results of Essays 1 and 2 also have implications for firms’ discretion in how to communicate their top management’s incentives to the capital market. Essay 4 analyzes the role of fairness perceptions for the evaluation of executive compensation. For this purpose, two surveys targeting representative eligible voters as well as investment professionals were conducted. Essay 1 investigates the role of the detailed ‘Compensation Discussion and Analysis’, which is part of the Securities and Exchange Commission’s 2006 regulation, on investors’ evaluations of executive performance. Compensation disclosure complying with this regulation clarifies the relationship between realized reported compensation and the underlying performance measures and their target achievement levels. The experimental findings suggest that the salient presentation of executives’ incentives inherent in the ‘Compensation Discussion and Analysis’ makes investors’ performance evaluations less outcome dependent. Therefore, investors’ judgment and investment decisions might be less affected by noisy environmental factors that drive financial performance. The results also suggest that fairness perceptions of compensation contracts are essential for investors’ performance evaluations in that more transparent disclosure increases the perceived fairness of compensation and the performance evaluation of managers who are not responsible for a bad financial performance. These results have important practical implications, as firms might choose to communicate their top management’s incentive compensation more transparently in order to benefit from less volatile expectations about their future performance. Similar to the first experiment, the experiment described in Essay 2 addresses the question of more transparent compensation disclosure. However, unlike the first experiment, the second experiment does not analyze the effect of a more salient presentation of contract information but the informational effect of contract information itself. For this purpose, the experiment tests two conditions in which the assessment of the compensation contracts’ incentive compatibility, which determines executive effort, is either possible or not. On the one hand, the results suggest that the quality of investors’ expectations about executive effort is improved, but on the other hand investors might over-adjust their prior expectations about executive effort when confronted with an unexpected financial performance and under-adjust when the financial performance confirms their prior expectations. Therefore, in the experiment, more transparent compensation disclosure does not lead to more correct overall judgments of executive effort and even leads to lower processing quality of outcome information. These results add to the literature on disclosure, which predominantly advocates more transparency. The findings of the experiment, however, identify decreased information processing quality as a relevant disclosure cost category. Firms might therefore carefully evaluate the additional costs and benefits of more transparent compensation disclosure.
Together with the results from the experiment in Essay 1, the two experiments on compensation disclosure imply that firms should rather focus on their discretion in how to present their compensation disclosure in order to benefit from investors’ improved fairness perceptions and their spill-over onto performance evaluation. Essay 3 studies the behavioral effects of contextual factors in recruitment processes that do not affect the employer’s or the applicant’s bargaining power from a standard economic perspective. In particular, the experiment studies two common characteristics of recruitment processes: pre-contractual competition among job applicants and job applicants’ non-binding effort announcements, as they might be made during job interviews. Despite the standard economic irrelevance of these factors, the experiment develops theory regarding their behavioral effects on employees’ subsequent effort provision and employers’ contract design choices. The experimental findings largely support the predictions. More specifically, the results suggest that firms can benefit from increased effort and, therefore, may generate higher profits. Further, firms may seize a larger share of the employment relationship’s profit by highlighting the competitive aspects of the recruitment process and by requiring applicants to make announcements about their future effort. Finally, Essay 4 studies the role of fairness perceptions for the public evaluation of executive compensation. Although economic criteria for the design of incentive compensation generally do not make restrictive recommendations with regard to the amount of compensation, fairness perceptions might be relevant from the perspective of firms and standard setters. This is because behavioral theory has identified fairness as an important determinant of individuals’ judgment and decisions. However, although fairness concerns about executive compensation are often voiced in the popular media and even in the literature, evidence on the meaning of fairness in the context of executive compensation is scarce and ambiguous. In order to inform practitioners and standard setters whether fairness concerns are exclusive to non-professionals or relevant for investment professionals as well, the two surveys presented in Essay 4 aim to find commonalities in the opinions of representative eligible voters and investment professionals. The results suggest that fairness is an important criterion for both groups. In particular, exposure to risk in the form of the variable compensation share is an important criterion shared by both groups: the higher the assumed variable share, the higher the compensation amount that is perceived as fair. However, to a large extent, opinions on executive compensation depend on personality characteristics, and to some extent, investment professionals’ perceptions deviate systematically from those of non-professionals. The findings imply that firms might benefit from emphasizing the riskiness of their managers’ variable pay components and, therefore, the findings are also in line with those of Essay 1.
Abstract:
The advent of single-molecule fluorescence microscopy has allowed experimental molecular biophysics and biochemistry to transcend traditional ensemble measurements, in which the behavior of individual proteins could not be precisely sampled. The recent explosion in popularity of new super-resolution and super-localization techniques, coupled with technical advances in optical designs and fast, highly sensitive cameras with single-photon sensitivity and millisecond time resolution, has made it possible to track key motions, reactions, and interactions of individual proteins with high temporal resolution and spatial resolution well beyond the diffraction limit. Within the purview of membrane proteins and ligand-gated ion channels (LGICs), these outstanding advances in single-molecule microscopy allow for the direct observation of discrete biochemical states and their fluctuation dynamics. Such observations are fundamentally important for understanding molecular-level mechanisms governing these systems. Examples reviewed here include the effects of allostery on the stoichiometry of ligand binding in the presence of fluorescent ligands; the observation of subdomain partitioning of membrane proteins due to microenvironment effects; and the use of single-particle tracking experiments to elucidate characteristics of membrane protein diffusion and to directly measure the thermodynamic properties that govern the free-energy landscape of protein dimerization. The review of such characteristic topics represents a snapshot of efforts to push the boundaries of fluorescence microscopy of membrane proteins to the absolute limit.
Abstract:
Simulating surface wind over complex terrain is a challenge in regional climate modelling. Therefore, this study aims at identifying a set-up of the Weather Research and Forecasting (WRF) model that minimises systematic errors of surface winds in hindcast simulations. Major factors of the model configuration are tested to find a suitable set-up: the horizontal resolution, the planetary boundary layer (PBL) parameterisation scheme, and the way WRF is nested to the driving data set. Hence, a number of sensitivity simulations at a spatial resolution of 2 km are carried out and compared to observations. Given the importance of wind storms, the analysis is based on case studies of 24 historical wind storms that caused great economic damage in Switzerland. Each of these events is downscaled using eight different model set-ups, but sharing the same driving data set. The results show that the lack of representation of the unresolved topography leads to a general overestimation of wind speed in WRF. However, this bias can be substantially reduced by using a PBL scheme that explicitly considers the effects of non-resolved topography, which also improves the spatial structure of wind speed over Switzerland. The wind direction, although generally well reproduced, is not very sensitive to the PBL scheme. Further sensitivity tests include four types of nesting methods: nesting only at the boundaries of the outermost domain, analysis nudging, spectral nudging, and the so-called re-forecast method, where the simulation is frequently restarted. These simulations show that restricting the freedom of the model to develop large-scale disturbances slightly increases the temporal agreement with the observations, while further reducing the overestimation of wind speed, especially for maximum wind peaks. The model performance is also evaluated in the outermost domains, where the resolution is coarser. The results demonstrate the important role of horizontal resolution, where the step from 6 to 2 km significantly improves model performance. In summary, the combination of a grid size of 2 km, the non-local PBL scheme modified to explicitly account for non-resolved orography, and analysis or spectral nudging is a superior set-up when dynamical downscaling is aimed at reproducing real wind fields.
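As a rough illustration of how the configuration dimensions described above map onto WRF options, the Python dictionaries below list plausible namelist switches. This assumes that the "non-local PBL scheme modified for unresolved topography" corresponds to WRF's YSU scheme combined with the topo_wind surface-drag correction, and that analysis and spectral nudging are selected via grid_fdda; the values are illustrative and should be checked against the WRF version actually used.

```python
# Illustrative mapping of the tested configuration dimensions onto WRF namelist options
# (assumption: YSU PBL scheme + topo_wind correction; not taken from the study itself).
innermost_domain_setup = {
    "dx": 2000.0,            # horizontal grid spacing [m]; coarser parent domains at 6 km and above
    "dy": 2000.0,
    "bl_pbl_physics": 1,     # YSU non-local PBL scheme
    "topo_wind": 1,          # subgrid-orography drag correction for surface winds
}

nesting_strategies = {
    "boundaries_only": {"grid_fdda": 0},   # forcing enters only via the outer lateral boundaries
    "analysis_nudging": {"grid_fdda": 1},  # nudge toward the driving analysis on the model grid
    "spectral_nudging": {"grid_fdda": 2},  # nudge only the large-scale wavenumbers
    # the "re-forecast" method is a workflow choice (frequent restarts), not a namelist flag
}
```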
Abstract:
Southern Switzerland is a fire-prone area where fire has to be considered a natural environmental factor. In the past decades, fire frequency has tended to increase due to changes in landscape management. The most common type of fire is surface fire, which normally breaks out during the vegetation resting period. Usually this type of fire shows a short residence time (rapid spread), low to medium fire intensity, and limited size. South-facing slopes are particularly fire-prone, so that very high fire frequencies are possible: under these conditions passively resistant species and postfire resprouting species are favoured, usually leading to a reduction of the surviving species to a few fire-adapted sprouters. Evergreen broadleaves are extremely sensitive to repeated fires. A simulation of the potential vegetation of southern Switzerland under changed climatic conditions showed that the potential area of spread of forests rich in evergreen broad-leaved species coincides with the most fire-prone area of the region. Therefore, in the future, wildfires could play an important regulating role: most probably they will not stop the large-scale laurophyllisation of the thermophilous forests of southern Switzerland, but at sites with high fire frequency the vegetation shift could be slowed or even prevented by fire disturbances.
Abstract:
Cells infected with a temperature-sensitive phenotypic mutant of Moloney sarcoma virus (MuSVts110) exhibit a transformed phenotype at 33°C and synthesize two virus-specific proteins: p85(gag-mos), a gag-mos fusion protein, and p58(gag), a truncated gag precursor protein (the gag gene codes for viral structural proteins and mos is the MuSV transforming gene). At 39°C only p58(gag) is synthesized and the morphology of the cells is similar to that of uninfected NRK parental cells. Two MuSVts110-specific RNAs are made in MuSVts110-infected cells, one 4.0 kb in length, the other 3.5 kb. Previous work indicated that each of these RNAs arose by a single central deletion of parental MuSV genetic material, and that p58(gag) was made from the 4.0 kb RNA and p85(gag-mos) from the 3.5 kb RNA. The objective of my dissertation research was to map precisely the deletion boundaries of both MuSVts110 RNAs and to determine the proper reading frame across both deletion borders. This work arrived at the following conclusions: (a) Using S-1 nuclease analysis and primer extension sequencing, it was found that the 4.0 kb MuSVts110 RNA arose by a 1488-base deletion from the 5.2 kb parental MuSV genomic RNA. This deletion resulted in an out-of-frame fusion of the gag and mos genes that created a "stop" codon causing termination of translation just beyond the C-terminus of the gag region. Thus, this RNA can only be translated into the truncated gag protein p58(gag). (b) S-1 analysis of RNA from cells cultivated at different temperatures demonstrated that the 4.0 kb RNA was synthesized at all temperatures but that synthesis of the 3.5 kb RNA was temperature sensitive. These observations supported the data derived from blot hybridization experiments, the interpretation of which argued for the existence of a single provirus in MuSVts110-infected cells, and hence only a single primary transcript (the 4.0 kb RNA). (c) Analyses similar to those described in (a) above showed that the 3.5 kb RNA was derived from the 4.0 kb MuSVts110 RNA by a further deletion of 431 bases, fusing the gag and mos genes into a continuous reading frame capable of directing synthesis of the p85(gag-mos) protein. These sequence data, together with the presence of only one MuSVts110-specific provirus, indicate that a splice mechanism is employed to generate the 3.5 kb RNA, since the gag and mos genes are observed to be fused in frame in this RNA. (Author's abstract exceeds stipulated maximum length. Discontinued here with permission of author.)
Abstract:
Social capital, a relatively new public health concept, represents the intangible resources embedded in social relationships that facilitate collective action. Current interest in the concept stems from empirical studies linking social capital with health outcomes. However, in order for social capital to function as a meaningful research variable, conceptual development aimed at refining the domains, attributes, and boundaries of the concept is needed. An existing framework of social capital (Uphoff, 2000), developed from studies in India, was selected for congruence with the inductive analysis of pilot data from a community that was unsuccessful at mobilizing collective action. This framework provided the underpinnings for a formal ethnographic research study designed to examine the components of social capital in a community that had successfully mobilized collective action. The specific aim of the ethnographic study was to examine the fittingness of Uphoff's framework in the contrasting American community. A contrasting context was purposefully selected to distinguish essential attributes of social capital from those that were specific to one community. Ethnographic data collection methods included participant observation, formal interviews, and public documents. Data were originally analyzed according to codes developed from Uphoff's theoretical framework. The results from this analysis were only partially satisfactory, indicating that the theoretical framework required refinement. The refinement of the coding system resulted in the emergence of an explanatory theory of social capital that was tested with the data collected from formal fieldwork. Although Uphoff's framework was useful, its refinement revealed (1) trust as the dominant attribute of social capital, (2) efficacy of mutually beneficial collective action as the outcome indicator, (3) cognitive and structural domains more appropriately defined as the cultural norms of the community and the group, and (4) a definition of social capital as the combination of the cognitive norms of the community and the structural norms of the group that are either constructive or destructive to the development of trust and the efficacy of mutually beneficial collective action. This explanatory framework holds increased pragmatic utility for public health practice and research.
Abstract:
This paper examines whether the presence of informal credit markets reduces the cost of credit rationing in terms of growth. In a dynamic general equilibrium framework, we assume that firms are heterogeneous with different degrees of risk and that households invest in human capital development. With the help of Indian household-level data we show that the informal market reduces the cost of rationing by increasing the growth rate by 0.7 percent. This higher growth rate, in the presence of an informal sector, is due to the ability of the informal market to separate the high-risk from the low-risk firms thanks to better information. But even after such an improvement we do not obtain the optimal outcome. The findings, based on our second question, suggest that the revelation of firms' type, based on incentive-compatible pricing, can lead to an almost 2 percent higher growth rate as compared to the credit rationing regime with an informal sector.
Abstract:
Transglutaminases are a family of enzymes that catalyze the covalent cross-linking of proteins through the formation of ε-(γ-glutaminyl)-lysyl isopeptide bonds. Tissue transglutaminase (Tgase) is an intracellular enzyme which is expressed in terminally differentiated and senescent cells and also in cells undergoing apoptotic cell death. To characterize this enzyme and examine its relationship with other members of the transglutaminase family, cDNAs, the first two exons of the gene, and 2 kb of the 5′-flanking region, including the promoter, were isolated. The full-length Tgase transcript consists of 66 bp of 5′-UTR (untranslated) sequence, an open reading frame which encodes 686 amino acids, and 1400 bp of 3′-UTR sequence. Alignment of the deduced Tgase protein sequence with that of other transglutaminases revealed regions of strong homology, particularly in the active-site region. The Tgase cDNA was used to isolate and characterize a genomic clone encompassing the 5′ end of the mouse Tgase gene. The transcription start site was defined using genomic and cDNA clones coupled with S1 protection analysis and anchored PCR. This clone includes 2.3 kb upstream of the transcription start site and two exons that contain the first 256 nucleotides of the mouse Tgase cDNA sequence. The exon-intron boundaries have been mapped and compared with the exon-intron boundaries of three members of the transglutaminase family: human factor XIIIa, the human keratinocyte transglutaminase, and human erythrocyte band 4.1. Tissue Tgase exon II is similar to comparable exons of these genes. However, exon I bears no resemblance to any of the other transglutaminase amino-terminus exons. Previous work in our laboratory has shown that the transcription of the Tgase gene is directly controlled by retinoic acid and retinoic acid receptors. To identify the region of the Tgase gene responsible for regulating its expression, fragments of the Tgase promoter and 5′-flanking region were cloned into chloramphenicol acetyltransferase (CAT) reporter constructs. Transient transfection experiments with these constructs demonstrated that the upstream region of Tgase is a functional promoter which contains a retinoid response element within a 1573-nucleotide region spanning nucleotides −252 to −1825.
Abstract:
The Byrd Glacier discontinuity is a major boundary crossing the Ross Orogen, with crystalline rocks to the north and primarily sedimentary rocks to the south. Most models for the tectonic development of the Ross Orogen in the central Transantarctic Mountains consist of two-dimensional transects across the belt, but do not address the major longitudinal contrast at Byrd Glacier. This paper presents a tectonic model centering on the Byrd Glacier discontinuity. Rifting in the Neoproterozoic produced a crustal promontory in the craton margin to the north of Byrd Glacier. Oblique convergence of the terrane (Beardmore microcontinent) during the latest Neoproterozoic and Early Cambrian was accompanied by subduction along the craton margin of East Antarctica. New data presented herein in support of this hypothesis are U-Pb dates of 545.7 ± 6.8 Ma and 531.0 ± 7.5 Ma on plutonic rocks from the Britannia Range, north of Byrd Glacier. After docking of the terrane, subduction stepped out, and the Byrd Group was deposited during the Atdabanian-Botomian across the inner margin of the terrane. Beginning in the upper Botomian, reactivation of the sutured boundaries of the terrane resulted in an outpouring of clastic sediment and folding and faulting of the Byrd Group.
Abstract:
Investigation of the ferromagnetic fraction of sediments from the Brazil Basin and Rio Grande Rise shows that its main constituents are magnetite and hematite. The magnetite is detrital, but the hematite is both detrital and chemical in origin. Magnetite is the main carrier of the natural remanent magnetization (NRM); therefore, the NRM is detrital remanent magnetization (DRM). In a number of cases, the change of magnetic parameters along the stratigraphic column permits some refinement of the previously defined boundaries of the lithologic units.
Abstract:
In this study, we present grain-size distributions of the terrigenous fraction of two deep-sea sediment cores from the SE Atlantic (offshore Namibia) and from the SE Pacific (offshore northern Chile), which we 'unmix' into subpopulations interpreted as coarse eolian dust, fine eolian dust, and fluvial mud. The downcore ratios of the proportions of eolian dust and fluvial mud subsequently represent paleocontinental aridity records of southwestern Africa and northern Chile for the last 120,000 yr. The two records show a relatively wet Last Glacial Maximum (LGM) compared to a relatively dry Holocene, but different orbital variability on longer time scales. Generally, the northern Chilean aridity record shows higher-frequency changes, which are closely related to precessional variation in solar insolation, compared to the southwestern African aridity record, which shows a remarkable resemblance to the global ice-volume record. We relate the changes in continental aridity in southwestern Africa and northern Chile to changes in the latitudinal position of the moisture-bearing Southern Westerlies, potentially driven by the sea-ice extent around Antarctica and overprinted by tropical forcing in the equatorial Pacific Ocean.
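The following Python sketch shows one generic way to 'unmix' a single grain-size distribution into three log-normal subpopulations (coarse eolian dust, fine eolian dust, fluvial mud) by non-linear least squares. It is not necessarily the end-member algorithm used in the study; the grain-size grid, parameter guesses, and synthetic "observed" curve are illustrative assumptions.

```python
# Generic unmixing of a grain-size distribution into three log-normal subpopulations
# (illustrative sketch with synthetic data; not the study's algorithm or data).
import numpy as np
from scipy.optimize import curve_fit


def mode(phi, amp, mu, sigma):
    """One subpopulation: Gaussian in phi (log grain size), i.e. log-normal in microns."""
    return amp * np.exp(-0.5 * ((phi - mu) / sigma) ** 2)


def three_mode_mixture(phi, *p):
    """Sum of three subpopulations; p holds (amp, mu, sigma) for each mode."""
    return sum(mode(phi, *p[3 * i:3 * i + 3]) for i in range(3))


phi = np.linspace(1.0, 10.0, 90)                               # grain-size classes (phi units)
true_params = [8, 3.0, 0.5, 10, 5.0, 0.7, 6, 7.5, 0.9]          # synthetic "truth"
observed = three_mode_mixture(phi, *true_params) \
    + np.random.default_rng(0).normal(0.0, 0.1, phi.size)       # one synthetic measured sample

guess = [5, 3.0, 0.5, 5, 5.0, 0.7, 5, 7.5, 0.9]                  # coarse dust, fine dust, fluvial mud
popt, _ = curve_fit(three_mode_mixture, phi, observed, p0=guess, bounds=(0, np.inf))

dphi = phi[1] - phi[0]
areas = [np.sum(mode(phi, *popt[3 * i:3 * i + 3])) * dphi for i in range(3)]
eolian, fluvial = areas[0] + areas[1], areas[2]
print("eolian/fluvial ratio:", eolian / fluvial)                 # the downcore aridity proxy
```

Repeating such a fit for every depth level in a core and plotting the eolian/fluvial ratio downcore is the kind of operation that yields the aridity records described in the abstract.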
Abstract:
Slow slip forms part of the spectrum of fault behaviour between stable creep and destructive earthquakes. Slow slip occurs near the boundaries of large earthquake rupture zones and may sometimes trigger fast earthquakes. It is thought to occur on faults composed of rocks that strengthen under fast slip rates, preventing rupture as a normal earthquake, or on faults that have elevated pore-fluid pressures. However, the processes that control slow rupture and the relationship between slow and normal earthquakes are enigmatic. Here we use laboratory experiments to simulate faulting in natural rock samples taken from shallow parts of the Nankai subduction zone, Japan, where very low-frequency earthquakes - a form of slow slip - have been observed. We find that the fault rocks exhibit decreasing strength over millimetre-scale slip distances rather than weakening due to increasing velocity. However, the sizes of the slip nucleation patches in our laboratory simulations are similar to those expected for the very low-frequency earthquakes observed in Nankai. We therefore suggest that this type of fault-weakening behaviour may generate slow earthquakes. Owing to the similarity between the expected behaviour of slow earthquakes based on our data and that of normal earthquakes during nucleation, we suggest that some types of slow slip may represent prematurely arrested earthquakes.
Abstract:
Tectonic structure and anomalous distributions of geophysical fields in the Sea of Okhotsk region are considered; the lack of reliable data on the age of the lithosphere beneath basins of various origin in the Sea of Okhotsk is noted. Model calculations based on geological and geophysical data yielded an age of 65 Ma (Cretaceous-Paleocene boundary) for the Central Okhotsk Rise, which is underlain by continental lithosphere. This estimate agrees with the age (end of the Cretaceous) derived from seismostratigraphic data. A comparative analysis of theoretical and measured heat flows in the Akademii Nauk Rise, underlain by thinned continental crust, is performed. The analysis indicates that the measured thermal background of the rise is about 20% higher, which is consistent with the high negative gradient of gravity anomalies in this area. Calculations yielded an age of 36 Ma (Early Oligocene) and a lithosphere thickness of 50 km for the South Okhotsk Depression, whose seafloor was formed by back-arc spreading. The estimated age of the depression is supported by kinematic data for the region, and the calculated thickness of the lithosphere coincides with the value estimated from magnetotelluric sounding data; this indicates that the formation time (36 Ma) of the South Okhotsk Depression was estimated correctly. Numerical modeling performed to determine the basement age of rift basins in the Sea of Okhotsk gave the following estimates: 18 Ma (Early Miocene) for the Deryugin Basin, 12 Ma (Middle Miocene) for the TINRO Basin, and 23 Ma (Late Oligocene) for the West Kamchatka Trough. These estimates agree with the formation time (Oligocene-Quaternary) of the sedimentary cover in the rift basins of the Sea of Okhotsk derived from geological and geophysical data. Model temperature estimates are obtained for lithologic and stratigraphic boundaries of the sedimentary cover in the Deryugin and TINRO Basins and the West Kamchatka Trough; the temperature analysis indicates that the latter two structures are promising for oil and hydrocarbon gas generation, with the West Kamchatka Trough possessing better reservoir properties than the TINRO and Deryugin Basins. The latter is promising for hydrocarbon gas generation. Paleogeodynamic reconstructions of the evolution of the Sea of Okhotsk region are obtained for 90, 66, and 36 Ma on the basis of kinematic, geomagnetic, structural, tectonic, geothermal, and other geological and geophysical data.
Abstract:
Neogene stratigraphy of the tropical and subtropical Pacific based on radiolarians is studied in the book. A detailed comparison of coeval zonal systems from the tropics and subtropics is given, and the possibility of using a uniform zonal scale in these areas is demonstrated. The magnitude of changes in assemblages at the boundaries of the Neogene zones is studied in detail. Six stages in the development of radiolarians during the Neogene are identified in the tropics. The stratigraphic levels where the greatest faunal changes occurred form the natural boundaries of these stages. 72 species of radiolarians (two of which are new) are described in the book.