943 results for Repeated Averages of Real-Valued Functions
Abstract:
This manual contains a summary of acquisition policy and makes recommendations to implement law and policy.
Abstract:
The leverage and debt maturity choices of real estate companies are interdependent, and are not made separately as is often assumed in the literature. We use three-stage least squares (3SLS) regression analysis to explore this interdependence for a sample of listed U.S. real estate companies and Real Estate Investment Trusts (REITs) traded between 1973 and 2006. We find substantial differences in the nature of the relationship between leverage and maturity for the two firm types. Leverage is a determinant of maturity for non-REITs, whereas maturity is a determinant of leverage for REITs. We also find that the drivers of capital structure choices in real estate companies and REITs clearly reflect the effects of REIT regulation.
Abstract:
Eukaryotic genomes contain repetitive DNA sequences, including simple repeats and more complex transposable elements (TEs). Many TEs reach high copy numbers in the host genome owing to their specific amplification mechanisms. There is growing evidence that TEs contribute to gene transcriptional regulation. However, excessive TE activity may reduce genome stability, so TEs are suppressed by the transcriptional gene silencing machinery via specific chromatin modifications. Conversely, the effectiveness of these epigenetic silencing mechanisms poses a risk to TE survival in the host genome. TEs may therefore have evolved specific strategies for bypassing epigenetic control and allowing the emergence of new TE copies. Recent studies suggested that epigenetic silencing can be, at least transiently, attenuated by heat stress in A. thaliana. Heat stress induced strong transcriptional activation of the COPIA78 family of LTR-retrotransposons named ONSEN, and even their transposition in mutants deficient in siRNA biogenesis. ONSEN transcriptional activation was facilitated by the presence of heat-responsive elements (HREs) within the long terminal repeats, which serve as a binding platform for HEAT SHOCK FACTORs (HSFs). This thesis focused on the evolution of ONSEN heat responsiveness in Brassicaceae. Using a whole-transcriptome sequencing approach, multiple Arabidopsis lyrata ONSENs with a conserved heat response were found and, together with ONSENs from other Brassicaceae, were used to reconstruct the evolution of ONSEN HREs. This indicated an ancestral configuration of two HSF binding motifs organized as a palindrome. In the genera Arabidopsis and Ballantinia, a local duplication of this locus increased the number of HSF binding motifs to four, forming a high-efficiency HRE. In addition, whole-transcriptome analysis revealed the novel heat-responsive TE families COPIA20, COPIA37 and HATE.
Notably, HATE represents a previously unknown COPIA family that occurs in several Brassicaceae species but is absent in A. thaliana. Putative HREs were identified within the LTRs of COPIA20, COPIA37 and HATE of A. lyrata, and were preliminarily validated by transcriptional analysis upon heat induction in a subsequent survey of Brassicaceae species. Phylogenetic analysis indicated a repeated evolution of heat responsiveness within Brassicaceae COPIA LTR-retrotransposons, suggesting that the acquisition of heat responsiveness may represent a successful strategy for TE survival within the host genome.
Abstract:
In this note we study the endomorphisms of certain Banach algebras of infinitely differentiable functions on compact plane sets, associated with weight sequences M. These algebras were originally studied by Dales, Davie and McClure. In a previous paper, this problem was solved in the case of the unit interval for many weights M. Here we investigate the extent to which the methods used previously apply to general compact plane sets, and introduce some new methods. In particular, we obtain many results for the case of the closed unit disc. This research was supported by EPSRC grant GR/M31132.
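For context (my summary of the standard definition, not part of the abstract): the algebras in question, often denoted $D(X, M)$, consist of the infinitely differentiable functions on the compact plane set $X$ for which the norm

$$\|f\| = \sum_{n=0}^{\infty} \frac{\|f^{(n)}\|_X}{M_n}$$

is finite. This is an algebra norm when the weight sequence satisfies $M_{m+n} \ge \binom{m+n}{m} M_m M_n$ (with $M_0 = 1$), so that the binomial coefficients arising from the Leibniz rule for $(fg)^{(n)}$ are absorbed by the weights.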
Abstract:
In Costa Rica, many secondary students have serious difficulty establishing relationships between mathematics and real-life contexts, and they question the utilitarian role of school mathematics. This fact motivated the research reported here, which evidences the need to move beyond methodologies unrelated to students’ reality, toward new didactical options that help students value mathematics, reasoning and its applications by connecting them with their socio-cultural context. The research used a case study as its qualitative methodology and social constructivism as its educational paradigm, in which knowledge is built by the student as a product of social interactions. A collection of learning situations was designed, validated, and implemented, allowing relationships to be established between mathematical concepts and the socio-cultural context of participants. The study analyzed the impact of students’ socio-cultural context on their learning of basic concepts of real-variable functions, consistent with the Ministry of Education (MEP) Official Program. Among the results, it was found that using students’ socio-cultural context improved their motivation and mathematical sense-making, and promoted cooperative social interactions. Contextualized learning situations were shown to favor comprehension of concepts, allowing students to see mathematics as a discipline closely related to their everyday life.
Abstract:
We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible, equivalent circuit representations of the generated rational functions. These new methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process, and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input, early-time transient response data to the late-time response while at the same time providing a means to both interpolate and extrapolate the used frequency-domain data. This hybrid methodology can be considered a generalization of frequency-domain rational function fitting, which utilizes frequency-domain response data only, and of time-domain rational function fitting, which utilizes transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide a more stable rational function fitting process utilizing a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied.
It is shown that, with regard to the computational cost of the rational function fitting process, such element-by-element rational function fitting is more advantageous than full matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational functions of different elements of the network matrix, such an approach provides improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
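As background (standard notation from the macromodeling literature, not reproduced from the thesis), each element of the network matrix is typically fitted to a pole-residue form

$$H(s) \approx d + \sum_{k=1}^{N} \frac{r_k}{s - p_k},$$

with poles $p_k$ in the open left half-plane for stability. Passivity of a fitted admittance matrix $Y(s)$ additionally requires $G(\omega) = \operatorname{Re} Y(j\omega)$ to be positive semidefinite for all $\omega$, which is the kind of constraint a quadratic-programming passivity-enforcement step imposes by perturbing the residues $r_k$.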
Abstract:
The real-quaternionic indicator, also called the $\delta$ indicator, indicates whether a self-conjugate representation is of real or quaternionic type. It is closely related to the Frobenius-Schur indicator, which we call the $\varepsilon$ indicator. The Frobenius-Schur indicator $\varepsilon(\pi)$ is known to be given by a particular value of the central character. We would like a similar result for the $\delta$ indicator. When $G$ is compact, $\delta(\pi)$ and $\varepsilon(\pi)$ coincide; in general, they are not necessarily the same. In this thesis, we give a relation between the two indicators when $G$ is a real reductive algebraic group. This relation also leads to a formula for $\delta(\pi)$ in terms of the central character. In the second part, we consider the construction of the local Langlands correspondence for $GL(2,F)$ when $F$ is a non-Archimedean local field with odd residual characteristic. By re-examining the construction, we provide new proofs of some important properties of the correspondence; namely, that the construction is independent of the choice of additive character in the theta correspondence.
Abstract:
Given a bent function $f(x)$ of $n$ variables, its max-weight and min-weight functions are introduced as the Boolean functions $f^+(x)$ and $f^-(x)$ whose supports are the sets $\{a \in \mathbb{F}_2^n \mid w(f \oplus l_a) = 2^{n-1} + 2^{n/2-1}\}$ and $\{a \in \mathbb{F}_2^n \mid w(f \oplus l_a) = 2^{n-1} - 2^{n/2-1}\}$ respectively, where $w(f \oplus l_a)$ denotes the Hamming weight of the Boolean function $f(x) \oplus l_a(x)$ and $l_a(x)$ is the linear function defined by $a \in \mathbb{F}_2^n$. Both $f^+(x)$ and $f^-(x)$ are proved to be bent functions. Furthermore, combining the 4 minterms of 2 variables with the max-weight or min-weight functions of a 4-tuple $(f_0(x), f_1(x), f_2(x), f_3(x))$ of bent functions of $n$ variables such that $f_0(x) \oplus f_1(x) \oplus f_2(x) \oplus f_3(x) = 1$, a bent function of $n+2$ variables is obtained. A family of 4-tuples of bent functions satisfying the above condition is introduced, and finally, the number of bent functions that can be constructed using the method introduced in this paper is obtained. Our construction is also compared with other constructions of bent functions.
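As an illustration of the max-weight/min-weight definitions, the following sketch (my own, using the standard bent function $f(x) = x_1 x_2 \oplus x_3 x_4$ as an example; not code from the paper) computes $f^+$ and $f^-$ by brute force and checks via the Walsh spectrum that both are bent:

```python
# Illustrative sketch: max-weight and min-weight functions of the
# classic bent function f(x) = x1*x2 XOR x3*x4 (n = 4).
from itertools import product

n = 4
points = list(product((0, 1), repeat=n))  # all of F_2^n

def f(x):
    return (x[0] & x[1]) ^ (x[2] & x[3])

def l(a, x):
    # linear function l_a(x) = a . x over F_2
    s = 0
    for ai, xi in zip(a, x):
        s ^= ai & xi
    return s

def weight(g):
    # Hamming weight of a Boolean function: number of inputs mapped to 1
    return sum(g(x) for x in points)

hi = 2 ** (n - 1) + 2 ** (n // 2 - 1)  # 2^{n-1} + 2^{n/2-1} = 10
lo = 2 ** (n - 1) - 2 ** (n // 2 - 1)  # 2^{n-1} - 2^{n/2-1} = 6

# Supports of f+ and f-: the a's where w(f XOR l_a) is maximal / minimal.
sup_plus = {a for a in points if weight(lambda x: f(x) ^ l(a, x)) == hi}
sup_minus = {a for a in points if weight(lambda x: f(x) ^ l(a, x)) == lo}

def f_plus(x):
    return 1 if x in sup_plus else 0

def f_minus(x):
    return 1 if x in sup_minus else 0

def is_bent(g):
    # g is bent iff every Walsh coefficient has magnitude 2^{n/2}
    return all(
        abs(sum((-1) ** (g(x) ^ l(a, x)) for x in points)) == 2 ** (n // 2)
        for a in points
    )

print(is_bent(f_plus), is_bent(f_minus))  # -> True True
```

Since $f$ is bent, every $w(f \oplus l_a)$ equals one of the two extreme values, so the two supports partition $\mathbb{F}_2^n$; the final check confirms the paper's claim that both derived functions are again bent.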
Abstract:
In 2010, the American Association of State Highway and Transportation Officials (AASHTO) released a safety analysis software system known as SafetyAnalyst. SafetyAnalyst implements the empirical Bayes (EB) method, which requires the use of Safety Performance Functions (SPFs). The system is equipped with a set of national default SPFs, and the software calibrates the default SPFs to represent the agency’s safety performance. However, it is recommended that agencies generate agency-specific SPFs whenever possible. Many investigators support the view that the agency-specific SPFs represent the agency data better than the national default SPFs calibrated to agency data. Furthermore, it is believed that the crash trends in Florida are different from the states whose data were used to develop the national default SPFs. In this dissertation, Florida-specific SPFs were developed using the 2008 Roadway Characteristics Inventory (RCI) data and crash and traffic data from 2007-2010 for both total and fatal and injury (FI) crashes. The data were randomly divided into two sets, one for calibration (70% of the data) and another for validation (30% of the data). The negative binomial (NB) model was used to develop the Florida-specific SPFs for each of the subtypes of roadway segments, intersections and ramps, using the calibration data. Statistical goodness-of-fit tests were performed on the calibrated models, which were then validated using the validation data set. The results were compared in order to assess the transferability of the Florida-specific SPF models. The default SafetyAnalyst SPFs were calibrated to Florida data by adjusting the national default SPFs with local calibration factors. The performance of the Florida-specific SPFs and SafetyAnalyst default SPFs calibrated to Florida data were then compared using a number of methods, including visual plots and statistical goodness-of-fit tests. 
The plots of SPFs against the observed crash data were used to compare the prediction performance of the two models. Three goodness-of-fit measures, the mean absolute deviance (MAD), the mean square prediction error (MSPE), and the Freeman-Tukey R² (R²FT), were also used for comparison in order to identify the better-fitting model. The results showed that Florida-specific SPFs yielded better prediction performance than the national default SPFs calibrated to Florida data. The performance of Florida-specific SPFs was further compared with that of the full SPFs, which include both traffic and geometric variables, in two major applications of SPFs, i.e., crash prediction and identification of high crash locations. The results showed that both SPF models yielded very similar performance in both applications. These empirical results support the use of the flow-only SPF models adopted in SafetyAnalyst, which require much less effort to develop compared to full SPFs.
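For concreteness, the three goodness-of-fit measures can be sketched as follows (the formulas are the conventional ones from the SPF literature as I understand them, and the crash counts below are made up for illustration; nothing here is taken from the dissertation's data):

```python
# Sketch of the three goodness-of-fit measures for observed vs. SPF-predicted
# crash counts: MAD, MSPE, and the Freeman-Tukey R^2 (assumed conventional
# formulas, not the dissertation's code).
from math import sqrt

def mad(obs, pred):
    # mean absolute deviance
    return sum(abs(y - yhat) for y, yhat in zip(obs, pred)) / len(obs)

def mspe(obs, pred):
    # mean square prediction error
    return sum((y - yhat) ** 2 for y, yhat in zip(obs, pred)) / len(obs)

def ft_r2(obs, pred):
    # Freeman-Tukey R^2: R^2 computed on variance-stabilized counts,
    # with f_i = sqrt(y_i) + sqrt(y_i + 1) and fhat_i = sqrt(4*yhat_i + 1)
    f = [sqrt(y) + sqrt(y + 1) for y in obs]
    fhat = [sqrt(4 * yhat + 1) for yhat in pred]
    fbar = sum(f) / len(f)
    ss_res = sum((fi - fh) ** 2 for fi, fh in zip(f, fhat))
    ss_tot = sum((fi - fbar) ** 2 for fi in f)
    return 1 - ss_res / ss_tot

obs = [3, 0, 5, 2, 7, 1]                # hypothetical observed crash counts
pred = [2.5, 0.8, 4.1, 2.2, 6.0, 1.3]   # hypothetical SPF predictions
print(round(mad(obs, pred), 3), round(mspe(obs, pred), 3),
      round(ft_r2(obs, pred), 3))
```

Lower MAD and MSPE and a higher R²FT indicate the better-fitting model, which is how the Florida-specific and calibrated default SPFs would be ranked against each other.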
Abstract:
High-resolution melt (HRM) analysis can identify sequence polymorphisms by comparing the melting curves of amplicons generated by real-time PCR amplification. We describe the application of this technique to identify Mycobacterium avium subspecies paratuberculosis (MAP) types I, II, and III. The HRM approach was based on type-specific nucleotide sequences in MAP1506, a member of the PPE (proline-proline-glutamic acid) gene family.
Abstract:
To advance knowledge of precipitation development over Madeira island, four rainfall patterns are investigated based on high-resolution numerical simulations performed with the MESO-NH model. The main environmental conditions during these precipitation periods are examined, and important factors leading to significant accumulated precipitation in Madeira are shown. We found that the combination of orographic effects and atmospheric conditions is essential for the establishment of each situation. Under a moist and conditionally unstable atmosphere, convection over the island is triggered, and its location was determined mainly by variations of the ambient flow, which were also associated with different moist Froude numbers. Interestingly, our results showed some similarities with situations discussed in idealized studies. However, the real variations of the atmospheric configuration confirm the complexity of significant precipitation development in mountainous regions. In addition, precipitating systems initially formed over the ocean were simulated reaching the island. The four periods were characterised by different durations, and the local terrain interacting with the mesoscale circulation was decisive in producing a large part of the precipitation, which was concentrated in distinct regions of the island determined by the airflow dynamics.
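For reference (my addition, the standard mesoscale-meteorology convention rather than a formula from the paper), the moist Froude number that distinguishes flow-over from flow-around regimes is usually defined as

$$F_w = \frac{U}{N_m h},$$

where $U$ is the upstream wind speed, $N_m$ the moist Brunt-Väisälä frequency, and $h$ the height of the barrier; small $F_w$ favors flow splitting around the island, while large $F_w$ favors flow over the terrain.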
Abstract:
Advancements in technology have enabled increasingly sophisticated automation to be introduced into the flight decks of modern aircraft. Generally, this automation was added to accomplish worthy objectives such as reducing flight crew workload, adding capability, or increasing fuel economy. Automation is necessary because not all of the functions required for mission accomplishment in today’s complex aircraft are within the capabilities of the unaided human operator, who lacks the sensory capacity to detect much of the information required for flight. To a large extent, these objectives have been achieved. Nevertheless, despite all the benefits of increasing amounts of highly reliable automation, vulnerabilities do exist in flight crew management of automation and Situation Awareness (SA). Issues associated with flight crew management of automation include:
• Pilot understanding of automation’s capabilities, limitations, modes, and operating principles and techniques.
• Differing pilot decisions about the appropriate automation level to use, or whether to turn automation on or off, in unusual or emergency situations.
• Human-Machine Interfaces (HMIs) that are not always easy to use, which can be problematic when pilots experience high-workload situations.
• Complex automation interfaces, large differences in automation philosophy and implementation among aircraft types, and inadequate training, all of which contribute to deficiencies in flight crew understanding of automation.
Abstract:
One of the most pervasive concepts underlying computational models of information processing in the brain is linear input integration of rate-coded univariate information by neurons. After a suitable learning process, this results in neuronal structures that statically represent knowledge as a vector of real-valued synaptic weights. Although this general framework has contributed to the many successes of connectionism, in this paper we argue that for all but the most basic cognitive processes, a more complex, multivariate dynamic neural coding mechanism is required: knowledge should not be spatially bound to a particular neuron or group of neurons. We conclude the paper with a discussion of a simple experiment that illustrates dynamic knowledge representation in a spiking-neuron connectionist system.