Abstract:
An unlisted property fund is a private investment vehicle that aims to deliver direct property total returns and may also employ financial leverage, which accentuates performance. Such funds have become a far more prevalent institutional property investment conduit since the early 2000s. Investors have been attracted to them primarily by the ease of gaining property exposure, both domestically and internationally, and by their diversification benefits, given the capital-intensive nature of constructing a well-diversified commercial property portfolio. Despite their greater prominence, however, little academic research has been conducted on the performance and risks of unlisted property fund investments, which can be attributed to a paucity of available data and to limited time series where data exist. In this study we use a unique dataset of institutional UK unlisted property funds over the period 2003Q4 to 2011Q4, within a panel modelling framework, to determine the key factors affecting fund performance. The sample provides a rich set of unlisted property fund factors, including market exposures, direct property characteristics, and the level of financial leverage employed. The findings from the panel regression analysis show that a small number of variables can account for the performance of unlisted property funds; investors should consider these variables when assessing the risk and return of these vehicles. We also study the impact of financial leverage on the performance of these vehicles through the recent global financial crisis and the subsequent UK commercial property market downturn. The findings indicate a significant asymmetric effect of employing debt finance within unlisted property funds.
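As a concrete illustration of the panel approach, the hedged sketch below fits a fund fixed-effects regression on simulated data; the variable names (market, leverage, down) and all parameter values are illustrative assumptions, not the authors' factor set.

```python
# Minimal sketch of a fixed-effects panel regression of fund returns on a
# market factor and leverage, on simulated data (not the study's dataset).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_funds, n_quarters = 40, 33                 # 2003Q4-2011Q4 spans 33 quarters
fund = np.repeat(np.arange(n_funds), n_quarters)
market = np.tile(rng.normal(0.01, 0.05, n_quarters), n_funds)
leverage = rng.uniform(0.0, 0.6, n_funds * n_quarters)

# Simulated data-generating process: gearing scales the market return, with
# an extra drag in down markets (an asymmetric leverage effect).
down = np.minimum(market, 0.0)
ret = (1 + leverage) * market + 0.8 * leverage * down \
      + rng.normal(0, 0.01, n_funds * n_quarters)

df = pd.DataFrame({"fund": fund, "ret": ret, "market": market,
                   "leverage": leverage, "down": down})

# Fund fixed effects via entity dummies (equivalent to a within estimator).
res = smf.ols("ret ~ market * leverage + down:leverage + C(fund)", data=df).fit()
print(res.params.filter(regex="market|leverage|down"))
```

The down:leverage interaction is one simple way to probe an asymmetric effect of gearing in market downturns of the kind the abstract reports.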
Abstract:
Relating system dynamics to the broader systems movement, the key notion is that reinforcing loops deserve no less attention than balancing loops. Three specific propositions follow. First, since reinforcing loops arise in surprising places, investigations of complex systems must consider their possible existence and potential impact. Second, because the strength of reinforcing loops can be misinferred (we include an example from the field of servomechanisms), computer simulation can be essential. Be it project management, corporate growth, or inventory oscillation, simulation helps to assess the consequences of reinforcing loops and the options for intervention. Third, in social systems the consequences of reinforcing loops are not inevitable. Examples concerning globalization illustrate how difficult it can be to challenge assumptions of inevitability. However, system dynamics and ideas from contemporary social theory help to show that even the most complex social systems are, in principle, subject to human influence. In conclusion, by employing these ideas, and by attending to reinforcing as well as balancing loops, system dynamics work can improve the understanding of social systems and illuminate our choices when attempting to steer them.
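A minimal sketch (assumed parameter values, not from the paper) contrasting the two loop types by Euler integration: a reinforcing loop compounds its own output, while a balancing loop closes the gap to a goal.

```python
dt, steps = 0.1, 100
growth_rate, goal, adjust_time = 0.3, 100.0, 2.0

reinforcing = [1.0]   # dx/dt = g * x          -> compounds on itself
balancing = [1.0]     # dx/dt = (goal - x) / T -> closes the gap to a goal

for _ in range(steps):
    reinforcing.append(reinforcing[-1] + dt * growth_rate * reinforcing[-1])
    balancing.append(balancing[-1] + dt * (goal - balancing[-1]) / adjust_time)

print(f"reinforcing loop after {dt * steps:.0f} time units: {reinforcing[-1]:.1f}")
print(f"balancing loop settles near its goal of {goal:.0f}: {balancing[-1]:.1f}")
```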
Abstract:
Development of the patch clamp technique by the Nobel laureates Bert Sakmann and Erwin Neher led to huge advances in ion channel research, laying the foundations for, and revolutionizing, electrophysiological studies of cells and ion channels. These ion channels underlie many basic cellular physiological processes and are therefore key therapeutic targets for pharmaceutical companies. However, current pharmacological strategies are hampered by the lack of specific ion channel blockers. Intense research and development programs are now actively employing antibodies to target ion channels in various formats. This review discusses the use of ion channel antibodies, and their associated small molecules, as pharmacological tools, an approach termed immunopharmacology. In addition, we review some recent studies on the clinical applications of immunopharmacology and intrabodies.
Abstract:
A straightforward one-step method for the N-methylthiomethylation of benzimidazoles has been developed, employing DMSO as both solvent and reagent. The methodology has been applied to the synthesis of diverse N-methylthiomethyl derivatives of benzimidazoles. The products can be chemoselectively oxidized to the corresponding sulfoxides with NaBiO3 in acetic acid. Both the N-methylthiomethyl derivatives of benzimidazoles and their corresponding sulfoxides are important medicinal scaffolds.
Abstract:
In the mid-1970s it was recognized that, as well as being substances that deplete stratospheric ozone, chlorofluorocarbons (CFCs) were strong greenhouse gases that could have substantial impacts on the radiative forcing of climate change. Around a decade later, this group of radiatively active compounds was expanded to include a large number of replacements for ozone-depleting substances, such as chlorocarbons, hydrochlorocarbons, hydrochlorofluorocarbons (HCFCs), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), bromofluorocarbons, and bromochlorofluorocarbons. This paper systematically reviews the published literature on the radiative efficiencies (REs) of CFCs, bromofluorocarbons and bromochlorofluorocarbons (halons), HCFCs, HFCs, PFCs, SF6, NF3, and related halogen-containing compounds. In addition, we provide a comprehensive and self-consistent set of new calculations of REs and global warming potentials (GWPs) for these compounds, mostly employing atmospheric lifetimes taken from the available literature. We also present global temperature change potentials (GTPs) for selected gases. Infrared absorption spectra used in the RE calculations were taken from databases and from experimental and ab initio computational studies. Evaluations of REs and GWPs are presented for more than 200 compounds. Our calculations yield REs significantly (>5%) different from those in the Intergovernmental Panel on Climate Change Fourth Assessment Report (AR4) for 49 compounds, and we present new RE values for more than 100 gases that were not included in AR4. A widely used simple method to calculate REs and GWPs from absorption spectra and atmospheric lifetimes is assessed and updated. This is the most comprehensive review of the radiative efficiencies and global warming potentials of halogenated compounds performed to date.
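A sketch of the widely used simple method referred to above: convert a radiative efficiency (RE) and atmospheric lifetime into an absolute GWP for an exponentially decaying pulse, then normalize by an AGWP for CO2. The AGWP_CO2 constant and the CFC-11 inputs below are illustrative literature-style values assumed for the example, not numbers taken from this paper.

```python
import math

M_AIR = 28.97            # mean molar mass of dry air, g mol^-1
M_ATM = 5.135e18         # total mass of the atmosphere, kg
AGWP_CO2_100 = 9.17e-14  # assumed AGWP of CO2 at H=100 yr, W m^-2 yr kg^-1

def gwp(re_ppb: float, lifetime_yr: float, molar_mass: float,
        horizon_yr: float = 100.0) -> float:
    """GWP from RE and lifetime, assuming exponential decay of a 1 kg pulse."""
    # Convert RE from W m^-2 ppb^-1 to W m^-2 per kg of emitted gas.
    re_per_kg = re_ppb * 1e9 * M_AIR / (molar_mass * M_ATM)
    # Forcing integrated over the time horizon (absolute GWP).
    agwp = re_per_kg * lifetime_yr * (1.0 - math.exp(-horizon_yr / lifetime_yr))
    return agwp / AGWP_CO2_100

# CFC-11: RE ~0.26 W m^-2 ppb^-1, lifetime ~45 yr, M = 137.37 g mol^-1.
print(f"GWP-100 of CFC-11 ~ {gwp(0.26, 45.0, 137.37):.0f}")  # roughly 4700
```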
Abstract:
Design support for typeface design: collaborative work commissioned by Adobe, Inc. Published 2011. The original Bickham typeface was based on the hands of the 18th-century writing master George Bickham. The ornate script represented the apogee of the art of formal writing with a steel nib, and it defined the visual style for decorated, formal documents. In 2010 Adobe revised and extended the typeface with the express purpose of making it a showcase for OpenType technology, demonstrating the visual importance of using different glyph forms in different contexts by employing contextual substitution rules. Although Bickham had published a single example of a Greek style, it was a standalone exercise, never intended to match the Latin. The key challenge was to identify historical records of appropriate Greek writing, preferably by writers familiar with the language, and to adapt them for digital typography and the particularities of contextual substitution, in a manner that would not make the Greek a 'second-class citizen'. Research involved uncovering and analysing appropriate contemporary and later writing examples to identify both the range of writing styles of the period and the manner of joining letters in written Greek with both pointed pens and broad nibs. This work was essential to make up for the comparative lack of relevant material by Bickham, as well as to investigate the range of stylistic variants approved for the final typeface, which attempted to emulate a written texture through complex substitutions. This aspect of the work is highly original in implementing a substantial number of contextual alternates and ligatures, which were reviewed in the context of use, bringing together an analysis of occurring letter combinations and patterns and the design of stylistic alternates to imitate natural handwriting.
Abstract:
A quasi-optical interferometric technique capable of measuring antenna phase patterns without the need for a heterodyne receiver is presented. It is particularly suited to the characterization of terahertz antennas feeding power detectors, or mixers employing quasi-optical local oscillator injection. Examples of antenna phase patterns recorded at frequencies of 1.4 and 2.5 THz using homodyne detectors are presented; to our knowledge, these are the highest-frequency antenna phase patterns ever recovered. Knowledge of both the amplitude and phase patterns in the far field enables a Gauss-Hermite or Gauss-Laguerre beam-mode analysis to be carried out for the antenna, which is important in performance optimization calculations, such as of antenna gain and beam efficiency, at the design and prototype stages of antenna development. A full description of the beam would also be required if the antenna is to feed a quasi-optical system in the near-field to far-field transition region, a situation that often arises when the device is fitted directly at the back of telescopes in airborne observatories. A further benefit of the proposed technique is its simplicity for characterizing systems in situ, an advantage of considerable importance because, in many situations, components cannot be removed for further characterization once assembled. The proposed methodology is generic and should be useful across the wider sensing community, e.g., in single-detector acoustic imaging or in adaptive imaging array applications. Furthermore, it is applicable across other frequencies of the EM spectrum, provided adequate spatial and temporal phase stability of the source can be maintained throughout the measurement process. Phase information retrieval is also important to emergent research areas such as band-gap structure characterization, metamaterials research, electromagnetic cloaking, slow light, and super-lens design, as well as near-field and virtual imaging applications.
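A minimal 1-D sketch (illustrative, not the authors' code) of the beam-mode analysis that measured amplitude and phase patterns make possible: project a complex field profile onto normalized Hermite-Gaussian modes and read off the relative mode content. The beam waist w and the toy "measured" field are assumed values.

```python
import numpy as np
from scipy.special import eval_hermite
from math import factorial, pi, sqrt

w = 1.0                              # assumed beam waist parameter
x = np.linspace(-6 * w, 6 * w, 4001)
dx = x[1] - x[0]

def hg_mode(n: int) -> np.ndarray:
    """Normalized 1-D Hermite-Gaussian mode psi_n(x)."""
    norm = 1.0 / sqrt(2.0**n * factorial(n) * w * sqrt(pi / 2.0))
    return norm * eval_hermite(n, np.sqrt(2.0) * x / w) * np.exp(-(x / w) ** 2)

# Toy field: a slightly displaced Gaussian with a phase tilt, standing in
# for a recovered amplitude-and-phase antenna pattern.
field = np.exp(-((x - 0.2 * w) / w) ** 2) * np.exp(1j * 0.5 * x / w)

# Overlap integrals give the complex mode coefficients (modes are real here).
coeffs = [np.sum(field * hg_mode(n)) * dx for n in range(6)]
power = np.abs(coeffs) ** 2
print("relative power in modes 0-5:", np.round(power / power.sum(), 3))
```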
Abstract:
Simulations from eleven coupled chemistry-climate models (CCMs) employing nearly identical forcings have been used to project the evolution of stratospheric ozone throughout the 21st century. The model-to-model agreement in projected temperature trends is good, and all CCMs predict continued global mean cooling of the stratosphere over the next five decades, increasing from around 0.25 K/decade at 50 hPa to around 1 K/decade at 1 hPa under the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) A1B scenario. In general, the simulated ozone evolution is mainly determined by decreases in halogen concentrations and continued cooling of the global stratosphere due to increases in greenhouse gases (GHGs). Column ozone is projected to increase as stratospheric halogen concentrations return to 1980s levels. Because of ozone increases in the middle and upper stratosphere due to GHG-induced cooling, total ozone averaged over midlatitudes, outside the polar regions, and globally is projected to increase to 1980 values between 2035 and 2050, before lower stratospheric halogen amounts decrease to 1980 values. In the polar regions the CCMs simulate small temperature trends in the first and second halves of the 21st century in midwinter. Differences in stratospheric inorganic chlorine (Cly) among the CCMs are key to diagnosing the intermodel differences in simulated ozone recovery, in particular in the Antarctic. There are substantial quantitative differences in the simulated Cly, with the October mean Antarctic Cly peak value varying from less than 2 ppb to over 3.5 ppb across the CCMs, and the date at which Cly returns to 1980 values varying from before 2030 to after 2050. There is a similar variation in the timing of the recovery of Antarctic springtime column ozone to 1980 values. As most models underestimate peak Cly near 2000, ozone recovery in the Antarctic could occur even later, between 2060 and 2070. In the Arctic the column ozone increase in spring does not follow halogen decreases as closely as in the Antarctic, reaching 1980 values before Arctic halogen amounts decrease to 1980 values and before the Antarctic recovers. None of the CCMs predicts future large decreases in Arctic column ozone. By 2100, total column ozone is projected to be substantially above 1980 values in all regions except the tropics.
Abstract:
Background: Association mapping, initially developed in human disease genetics, is now being applied to plant species. The model species Arabidopsis provided some of the first examples of association mapping in plants, identifying previously cloned flowering time genes despite high population sub-structure. More recently, association genetics has been applied to barley, where breeding activity has resulted in a high degree of population sub-structure. A major genotypic division within barley is that between winter- and spring-sown varieties, which differ in their requirement for vernalization to promote subsequent flowering. To date, all attempts to validate association genetics in barley by identifying the major flowering time loci that control vernalization requirement (VRN-H1 and VRN-H2) have failed. Here, we validate the use of association genetics in barley by identifying VRN-H1 and VRN-H2, despite their prominent role in determining population sub-structure. Results: Taking barley as a typical inbreeding crop, and seasonal growth habit as a major partitioning phenotype, we develop an association mapping approach that successfully identifies VRN-H1 and VRN-H2, the underlying loci largely responsible for this agronomic division. We find that a combination of Structured Association followed by Genomic Control, to correct for population structure and inflation of the test statistic, resolved significant associations only with VRN-H1 and the VRN-H2 candidate genes, as well as with two genes closely linked to VRN-H1 (HvCSFs1 and HvPHYC). Conclusion: We show that, after employing appropriate statistical methods to correct for population sub-structure, the genome-wide partitioning effect of allelic status at VRN-H1 and VRN-H2 does not result in the high levels of spurious association expected in highly structured samples. Furthermore, we demonstrate that both VRN-H1 and the candidate VRN-H2 genes can be identified using association mapping. Discrimination between intragenic VRN-H1 markers was achieved, indicating that candidate causative polymorphisms may be discerned and prioritised within a larger set of positive associations. This proof-of-concept study demonstrates the feasibility of association mapping in barley, even within highly structured populations. A major advantage of the method is that it does not require large numbers of genome-wide markers and is therefore suitable for fine mapping and candidate gene evaluation, especially in species for which large numbers of genetic markers are either unavailable or too costly.
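A hedged sketch of the Genomic Control step mentioned above, on simulated statistics: estimate the inflation factor lambda as the ratio of the median observed chi-square to the theoretical chi-square(1) median, then deflate all test statistics. This is the generic Devlin-Roeder procedure, not the authors' exact pipeline, and the marker statistics are simulated.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Simulated 1-df association statistics, inflated by population structure.
true_lambda = 1.4
stats = true_lambda * rng.chisquare(df=1, size=2000)

# Genomic Control: lambda = median(observed) / median of chi2 with 1 df.
lam = np.median(stats) / chi2.ppf(0.5, df=1)   # chi2(1) median ~ 0.4549
corrected = stats / lam
pvals = chi2.sf(corrected, df=1)

print(f"estimated inflation factor lambda: {lam:.2f} (simulated: {true_lambda})")
print(f"markers with p < 1e-3 after correction: {(pvals < 1e-3).sum()}")
```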
Abstract:
Anthropogenic midden deposits are remarkably well preserved at the Neolithic settlement of Çatalhöyük and provide significant archaeological information on the types and nature of activities occurring at the site. To decipher their complex stratigraphy and to investigate formation processes, a combination of geoarchaeological techniques was used. Deposits were investigated from the early ceramic to the late Neolithic levels, targeting continuous sequences to examine both high-resolution and broader-scale changes in deposition. Thin-section micromorphology, combined with targeted phytolith and geochemical analyses, indicates that the deposits are composed of a diverse range of ashes and other charred and siliceous plant materials, with inputs of decayed plants and organic matter, fecal waste, and sedimentary aggregates, each with diverse depositional pathways. Activities identified include in situ burning with a range of different fuel types that may be associated with different activities. The complexity and heterogeneity of the midden deposits demonstrate the necessity of employing an integrated microstratigraphic approach as a prerequisite for cultural and palaeoenvironmental reconstructions.
Abstract:
The problem of symmetric stability is examined within the context of the direct Liapunov method. The sufficient conditions for stability derived by Fjørtoft are shown to imply finite-amplitude, normed stability. This finite-amplitude stability theorem is then used to obtain rigorous upper bounds on the saturation amplitude of disturbances to symmetrically unstable flows. By employing a virial functional, the necessary conditions for instability implied by the stability theorem are shown to be, in fact, sufficient for instability. The results of Ooyama are improved upon insofar as a tight two-sided (upper and lower) estimate of the growth rate of (modal or nonmodal) symmetric instabilities is obtained. The case of moist adiabatic systems is also considered.
Abstract:
For a Lévy process $\xi=(\xi_t)_{t\geq 0}$ drifting to $-\infty$, we define the so-called exponential functional as $I_\xi = \int_0^\infty e^{\xi_t}\,dt$. Under mild conditions on $\xi$, we show that the following factorization of exponential functionals holds: $I_\xi \stackrel{d}{=} I_{H^-} \times I_Y$, where $\times$ stands for the product of independent random variables, $H^-$ is the descending ladder height process of $\xi$, and $Y$ is a spectrally positive Lévy process with a negative mean constructed from its ascending ladder height process. As a by-product, we obtain an integral or power series representation for the law of $I_\xi$ for a large class of Lévy processes with two-sided jumps, and we also derive some new distributional properties. The proof of our main result relies on a fine Markovian study of a class of generalized Ornstein–Uhlenbeck processes, which is itself of independent interest. We use and refine an alternative approach to studying the stationary measure of a Markov process, which avoids some technicalities and difficulties that appear in the classical method of employing the generator of the dual Markov process.
Abstract:
It has been argued that colloquial dialects of Brazilian Portuguese (BP) have undergone significant linguistic change resulting in the loss of inflected infinitives (e.g., Pires, 2002, 2006). Since BP adults, at least educated ones, have complete knowledge of inflected infinitives, the implicit claim is that these are transmitted via formal education in the standard dialect. In the present article, I test one of the latent predictions of such claims: namely, that heritage speakers of BP who lack formal education in the standard dialect should never develop native-like knowledge of inflected infinitives. In doing so, I highlight two significant implications: (a) heritage speaker grammars are a good source for testing proposals about dialectal variation and language change, and (b) incomplete acquisition and/or attrition are not the only sources of differences in heritage language competence. Employing the syntactic and semantic tests of Rothman and Iverson (2007), I compare heritage speakers' knowledge to that of Rothman and Iverson's advanced adult L2 learners and educated native controls. Unlike the latter groups, the heritage speakers do not show target knowledge of inflected infinitives, lending support to Pires' claims and suggesting that literacy plays a significant role in the acquisition of this grammatical property in BP.
Abstract:
Two aircraft instruments for the measurement of total odd nitrogen (NOy) were compared side by side aboard a Learjet 35A in April 2003 during a campaign of the AFO2000 project SPURT (Spurengastransport in der Tropopausenregion). The instruments, albeit employing the same measurement principle (gold converter and chemiluminescence), had different inlet configurations. The ECO-Physics instrument operated by ETH-Zürich in SPURT had the gold converter mounted outside the aircraft, whereas the instrument operated by FZ-Jülich in the European project MOZAIC III (Measurements of ozone, water vapour, carbon monoxide and nitrogen oxides aboard Airbus A340 in-service aircraft) employed a Rosemount probe with 80 cm of FEP tubing connecting the inlet to the gold converter. The NOy concentrations during the flight ranged between 0.3 and 3 ppb. The two data sets were compared in a blind fashion, and each team followed its normal operating procedures. On average, the measurements agreed within 7%, i.e., within the combined uncertainty of the two instruments. This puts an upper limit on potential losses of HNO3 in the Rosemount inlet of the MOZAIC instrument. Larger transient deviations were observed during periods after calibrations and when the aircraft entered the stratosphere. The time lag of the MOZAIC instrument observed in these instances is in accordance with the time constant of the MOZAIC inlet line determined in the laboratory for HNO3.
Cross-layer design for MIMO systems over spatially correlated and keyhole Nakagami-m fading channels
Abstract:
Cross-layer design is a generic designation for a set of efficient adaptive transmission schemes, spanning multiple layers of the protocol stack, that aim to enhance the spectral efficiency and increase the transmission reliability of wireless communication systems. In this paper, one such cross-layer design scheme, combining physical-layer adaptive modulation and coding (AMC) with link-layer truncated automatic repeat request (T-ARQ), is proposed for multiple-input multiple-output (MIMO) systems employing orthogonal space-time block coding (OSTBC). The performance of the proposed cross-layer design is evaluated in terms of achievable average spectral efficiency (ASE), average packet loss rate (PLR), and outage probability, for which analytical expressions are derived, considering transmission over two types of MIMO fading channels, namely spatially correlated Nakagami-m fading channels and keyhole Nakagami-m fading channels. Furthermore, the effects of the maximum number of ARQ retransmissions, the numbers of transmit and receive antennas, the Nakagami fading parameter, and the spatial correlation parameters are studied and discussed based on numerical results and comparisons.
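An illustrative Monte Carlo sketch (not the paper's analytical derivation) of how ASE and outage arise under AMC over a Nakagami-m channel: draw channel power gains, select the highest-rate mode whose SNR threshold is met, and average the rates. The mode rates and thresholds below are assumed placeholder values.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 2.0                        # Nakagami fading parameter
avg_snr = 10 ** (15.0 / 10)    # 15 dB average SNR

# Nakagami-m power gain is Gamma(shape=m, scale=1/m) with unit mean.
gains = rng.gamma(shape=m, scale=1.0 / m, size=200_000)
snr = avg_snr * gains

# Assumed AMC modes: (rate in bits/s/Hz, minimum SNR in dB); mode 0 = outage.
rates = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.5])
thresholds_db = np.array([-np.inf, 0.0, 5.0, 10.0, 15.0, 20.0])
thresholds = 10 ** (thresholds_db / 10)       # -inf dB maps to 0 linear

# Highest mode whose threshold the instantaneous SNR exceeds.
mode = (snr[:, None] >= thresholds[None, :]).sum(axis=1) - 1
print(f"ASE ~ {rates[mode].mean():.2f} bits/s/Hz, "
      f"outage probability ~ {(mode == 0).mean():.3f}")
```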