925 results for % of sample area
Abstract:
The authors present a very interesting criterion for choosing a rectangular foundation. The writers would like to point out that obtaining the minimum area can be reduced to the problem of finding the minimum of x* + y*, subject to the condition x*·y* = k², whose solution is evidently x* = y* = k.
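The stated solution follows in one line from the arithmetic-geometric mean inequality; a minimal derivation in the abstract's notation (standard argument, not quoted from the discussion itself):

x^* + y^* \ge 2\sqrt{x^* y^*} = 2\sqrt{k^2} = 2k,

with equality exactly when x^* = y^*, so under the constraint x^* y^* = k^2 the minimum is attained at x^* = y^* = k.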
Abstract:
Background: The Strengths and Difficulties Questionnaire (SDQ) is a tool to measure the risk for mental disorders in children. The aim of this study is to describe the diagnostic efficiency and internal structure of the SDQ in the sample of children studied in the Spanish National Health Survey 2006. Methods: A representative sample of 6,773 children aged 4 to 15 years was studied. The data were obtained using the Minors Questionnaire in the Spanish National Health Survey 2006. The ROC curve was constructed, and the area under the curve, sensitivity, specificity and the Youden J index were calculated. The factor structure was studied using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) models. Results: The prevalence of behavioural disorders varied between 0.47% and 1.18% depending on the requirements of the diagnostic definition. The area under the ROC curve varied from 0.84 to 0.91 according to the diagnosis. Factor models were cross-validated by means of two different random subsamples for EFA and CFA. EFA suggested a model of three correlated factors, which CFA confirmed. A five-factor model according to EFA and the theoretical five-factor model described in the literature were also confirmed. The reliabilities of the factors of the different models were acceptable (>0.70, except for one factor with reliability 0.62). Conclusions: The diagnostic behaviour of the SDQ in the Spanish population is within the working limits described in other countries. According to the results obtained in this study, the diagnostic efficiency of the questionnaire is adequate to identify probable cases of psychiatric disorders in low-prevalence populations. Regarding the factor structure, we found that both the five- and the three-factor models fit the data with acceptable goodness-of-fit indices, the latter including an externalizing and an internalizing dimension and perhaps a meaningful positive social dimension. Accordingly, we recommend studying whether these differences depend on sociocultural factors or are, in fact, due to methodological questions.
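The ROC-based indices reported here (area under the curve, sensitivity, specificity, Youden J) can be computed as in the following minimal Python sketch; the scores, labels and cutoff below are hypothetical placeholders, not the survey's data.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Hypothetical data: a binary diagnostic status and a screening score (e.g. an SDQ total).
y_true = rng.integers(0, 2, size=500)                     # 1 = probable case
score = rng.normal(10.0, 4.0, size=500) + 5.0 * y_true    # higher score = higher risk

auc = roc_auc_score(y_true, score)                        # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, score)

j = tpr - fpr                                             # Youden J = sensitivity + specificity - 1
best = int(np.argmax(j))
print(f"AUC = {auc:.2f}")
print(f"optimal cutoff = {thresholds[best]:.1f}, sensitivity = {tpr[best]:.2f}, "
      f"specificity = {1 - fpr[best]:.2f}, J = {j[best]:.2f}")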
Abstract:
Doctoral thesis, Marine Sciences, Universidade de Lisboa, Faculdade de Ciências, 2016
Abstract:
Introduction. Unintended as it was, the European Court of Justice (ECJ, the Court, the Court of the EU) has played an extremely important role in the construction of the Area of Freedom, Security and Justice (AFSJ). The AFSJ was set up by the Treaty of Amsterdam in 1997 and only entered into force in May 1999. The fact that this is a new field of EU competence poses afresh all the fundamental questions – both political and legal – triggered by European integration, namely in terms of: a) distribution of powers between the Union and its Member States, b) attribution of competences between the various EU Institutions, c) direct effect and supremacy of EU rules, d) scope of competence of the ECJ, and e) extent of the protection given to fundamental rights. These questions have prompted judicial solutions which take into account both the extremely sensitive fields of law upon which the AFSJ is anchored and the EU's highly inconvenient three-pillar institutional framework.1 This structure is on the verge of being abandoned, provided the Treaty of Lisbon enters into force.2 The ECJ is the body whose institutional role stands to benefit most from this upcoming ‘depilarisation’, possibly more than that of the European Parliament. However spectacular this formal boost of the Court’s competence may be, the changes in real terms are not going to be that dramatic. This apparent contradiction is explained, to a large extent, by the fact that the Court has in many ways ‘provoked’, or even ‘anticipated’, the depilarisation of its own jurisdictional role, already under the existing three-pillar structure. Simply put, under the new – post Treaty of Lisbon – regime, the Court will have full jurisdiction over all AFSJ matters, as those are going to be fully integrated into what is now the first pillar. Some limitations will continue to apply, however, while a special AFSJ procedure will be institutionalised. Indeed, if we look into the new Treaty we may identify general modifications to the Court’s structure and jurisdiction affecting the AFSJ (section 2), modifications in the field of the AFSJ stemming from the abolition of the pillar structure (section 3) and, finally, some rules specifically applicable to the AFSJ (section 4).
Abstract:
The statements made in recent weeks by Russian officials, and especially President Vladimir Putin, in connection with Moscow’s policy towards Ukraine, may suggest that the emergence of a certain doctrine of Russian foreign and security policy is at hand, especially in relation to the post-Soviet area. Most of the arguments at the core of this doctrine are not new, but recently they have been formulated more openly and in more radical terms. Those arguments concern the role of Russia as the defender of Russian-speaking communities abroad and the guarantor of their rights, as well as specifically understood good neighbourly relations (meaning in fact limited sovereignty) as a precondition that must be met in order for Moscow to recognise the independence and territorial integrity of post-Soviet states. However, the new doctrine also includes arguments which have not been raised before, or have hitherto only been formulated on rare occasions, and which may indicate the future evolution of Russia’s policy. Specifically, this refers to Russia’s use of extralegal categories, such as national interest, truth and justice, to justify its policy, and its recognition of military force as a legitimate instrument to defend its compatriots abroad. This doctrine is effectively an outline of the conceptual foundation for Russian dominance in the post-Soviet area. It offers a justification for the efforts to restore the unity of the ‘Russian nation’ (or more broadly, the Russian-speaking community), within a bloc pursuing close integration (the Eurasian Economic Union), or even within a single state encompassing at least parts of that area. As such, it poses a challenge for the West, which Moscow sees as the main opponent of Russia’s plans to build a new order in Europe (Eurasia) that would undermine the post-Cold War order.
Abstract:
We estimate the 'fundamental' component of euro area sovereign bond yield spreads, i.e. the part of bond spreads that can be justified by country-specific economic factors, euro area economic fundamentals, and international influences. The yield spread decomposition is achieved using a multi-market, no-arbitrage affine term structure model with a unique pricing kernel. More specifically, we use the canonical representation proposed by Joslin, Singleton, and Zhu (2011) and introduce, alongside the standard spanned factors, a set of unspanned macro factors, as in Joslin, Priebsch, and Singleton (2013). The model is applied to yield curve data from Belgium, France, Germany, Italy, and Spain over the period 2005-2013. Overall, our results show that economic fundamentals are the dominant drivers behind sovereign bond spreads. Nevertheless, shocks unrelated to the fundamental component of the spread have played an important role in the dynamics of bond spreads since the intensification of the sovereign debt crisis in the summer of 2011.
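In models of this class, zero-coupon yields are affine functions of a state vector; a schematic statement in standard notation (not reproduced from the paper) is

y_t(\tau) = A(\tau) + B(\tau)^{\top} X_t,

where X_t collects the spanned yield-curve factors, the loadings A(\tau) and B(\tau) are pinned down by the no-arbitrage restrictions implied by the single pricing kernel, and the unspanned macro factors enter the dynamics of X_t without appearing in the pricing equation.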
Abstract:
The geothermal regime of the western margin of the Great Bahama Bank was examined using the bottom hole temperature and thermal conductivity measurements obtained during and after Ocean Drilling Program (ODP) Leg 166. This study focuses on the data from the drilling transect of Sites 1003 through 1007. These data reveal two important observational characteristics. First, temperature vs. cumulative thermal resistance profiles from all the drill sites show significant curvature in the depth range of 40 to 100 mbsf; they tend to be concave-upward in shape. Second, the conductive background heat-flow values for these five drill sites, determined from the deep, linear parts of the geothermal profiles, show a systematic variation along the drilling transect. Heat flow is 43-45 mW/m² on the seafloor away from the bank and decreases upslope to ~35 mW/m². We examine three mechanisms as potential causes for the curved geothermal profiles: (1) a recent increase in sedimentation rate, (2) influx of seawater into shallow sediments, and (3) temporal fluctuation of the bottom water temperature (BWT). Our analysis shows that the first mechanism is negligible. The second mechanism may explain the data from Sites 1004 and 1005. The temperature profile of Site 1006 is most easily explained by the third mechanism. We reconstruct the history of BWT at this site by solving the inverse heat conduction problem. The inversion result indicates gradual warming throughout this century by ~1°C and is consistent with other hydrographic and climatic data from the western subtropical Atlantic. However, data from Sites 1003 and 1007 do not seem to show such trends. Therefore, none of the three mechanisms tested here explains the observations from all the drill sites. As for the lateral variation of the background heat flow along the drill transect, we believe that much of it is caused by the thermal effect of the topographic variation. We model this effect by obtaining a two-dimensional analytical solution. The model suggests that the background heat flow of this area is ~43 mW/m², a value similar to the background heat flow determined for the Gulf of Mexico on the opposite side of the Florida carbonate platform.
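The curvature diagnosis rests on the standard steady-state conduction relation used in such "Bullard plot" analyses (textbook formulation, not quoted from the study):

T(z) = T_0 + q\,R(z), \qquad R(z) = \int_0^z \frac{dz'}{k(z')},

where q is the conductive heat flow, k the thermal conductivity and R the cumulative thermal resistance; for purely conductive, steady-state transport, temperature plotted against R is a straight line of slope q, so the concave-upward departures between 40 and 100 mbsf point to transient or advective disturbances such as the three mechanisms examined here.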
Abstract:
Clay mineralogy and geotechnical properties of Tarras clay, basin clays and tills from some parts of Schleswig-Holstein: Tarras clay of lower Eocene age, Quaternary till containing various admixtures of Tarras clay, as well as basin clay and varve clay from Schleswig-Holstein were investigated. Grain size distribution and soil mechanical characteristics were determined, indicating different geotechnical properties for each sediment type.
Abstract:
We combined the analysis of sediment trap data and satellite-derived sea surface chlorophyll to quantify the amount of organic carbon exported to the deep sea in the upwelling-induced high-production area off northwest Africa. In contrast to the generally global or basin-wide adoption of export models, we used a regionally fitted empirical model. Furthermore, the application of our model was restricted to a dynamically defined region of high chlorophyll concentration in order to restrict the model application to an environment of more homogeneous export processes. We developed a correlation-based approximation to estimate the surface source area for a sediment trap deployed from 11 June 1998 to 7 November 1999 at 21.25°N latitude and 20.64°W longitude off Cape Blanc. We also developed a regression model relating chlorophyll to the export of organic carbon to the 1000 m depth level. Carbon export was calculated on a daily basis for an area of high chlorophyll concentration (>1 mg/m³) adjacent to the coast. The resulting zone of high chlorophyll concentration covered 20,000-800,000 km² and yielded a yearly export of 1.123 to 2.620 Tg organic carbon. The average organic carbon export within the area of high chlorophyll concentration was 20.6 mg/m² per day, comparable to the 13.3 mg/m² per day found in the sediment trap results when normalized to the 1000 m level. We found strong interannual variability in export: the period autumn 1998 to summer 1999 exceeded the mean of the other three comparable periods by a factor of 2.25. We believe that this approach of using regionally fitted models can be successfully transferred to different oceanographic regions by selecting appropriate criteria, such as chlorophyll concentration, to define the area to which the model applies.
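The regression-plus-integration step can be illustrated with a short Python sketch; the chlorophyll-flux pairs, the zone area and the mean concentration below are hypothetical placeholders, not the study's data.

import numpy as np

# Hypothetical calibration data: surface chlorophyll (mg/m³) vs. organic carbon
# export to 1000 m (mg C/m² per day) at the trap site.
chl_obs = np.array([0.8, 1.2, 1.9, 2.5, 3.1])
flux_obs = np.array([8.0, 12.0, 19.0, 24.0, 31.0])

# Simple linear regression: flux = a * chl + b
a, b = np.polyfit(chl_obs, flux_obs, 1)

# Daily export over a dynamically defined high-chlorophyll zone (> 1 mg/m³).
area_km2 = 4.0e5            # area of the high-chlorophyll zone (hypothetical)
mean_chl = 1.8              # area-mean chlorophyll within the zone (hypothetical)
flux = a * mean_chl + b     # mg C per m² per day

# Convert to Tg C per day: mg -> Tg is a factor of 1e-15, km² -> m² is 1e6.
daily_export_tg = flux * area_km2 * 1e6 * 1e-15
print(f"slope = {a:.2f}, intercept = {b:.2f}, daily export = {daily_export_tg:.4f} Tg C/day")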
Abstract:
We report a method using variation in the chloroplast genome (cpDNA) to test whether oak stands of unknown provenance are of native and/or local origin. As an example, a sample of test oaks, of mostly unknown status in relation to nativeness and localness, was surveyed for cpDNA type. The sample comprised 126 selected trees, derived from 16 British seed stands, and 75 trees selected for their superior phenotype (201 tree samples in total). To establish whether these two test groups are native and local, their cpDNA type was compared with that of material of known autochthonous origin (results of a previous study which examined variation in 1076 trees from 224 populations distributed across Great Britain). In the previous survey of autochthonous material, four cpDNA types were identified as native; thus, if a test sample possessed a new haplotype it could be classed as non-native. Every one of the 201 test samples possessed one of the four cpDNA types found within the autochthonous sample; therefore none could be proven to be introduced and, on this basis, all were considered likely to be native. The previous study of autochthonous material also found that cpDNA variation was highly structured geographically; therefore, if the cpDNA type of a test sample did not match that of neighbouring autochthonous trees it could be considered non-local. A high proportion of the seed stand group (44.2 per cent) and of the phenotypically superior trees (58.7 per cent) possessed a cpDNA haplotype which matched that of the neighbouring autochthonous trees and can therefore be considered local, or at least cannot be proven to be introduced. The remainder of the test sample could be divided into those which did not grow in an area of overall dominance (18.7 per cent of seed stand trees and 28 per cent of phenotypically superior trees) and those which failed to match the neighbouring autochthonous haplotype (37.1 per cent and 13.3 per cent, respectively). Most of the non-matching test samples were located within 50 km of an area dominated by a matching autochthonous haplotype (96.0 per cent and 93.5 per cent, respectively), which potentially indicates only local transfer. Whilst such genetic fingerprinting tests have proven useful for assessing the origin of stands of unknown provenance, there are potential limitations to using a marker from the chloroplast genome (mostly adaptively neutral) for classifying seed material into categories which have adaptive implications. These limitations are discussed, particularly within the context of selecting adaptively superior material for restocking native forests.
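The two-step classification described above (native if the haplotype occurs in the autochthonous set; local if it matches the dominant neighbouring autochthonous haplotype) can be sketched as follows; the haplotype codes are hypothetical and this is not the study's code.

# Hypothetical haplotype codes; illustrates the classification logic only.
NATIVE_HAPLOTYPES = {"H1", "H2", "H3", "H4"}   # the four types found in autochthonous material

def classify(sample_haplotype, neighbouring_haplotype):
    """Return (nativeness, localness) assessments for one test tree."""
    native = "likely native" if sample_haplotype in NATIVE_HAPLOTYPES else "non-native"
    if neighbouring_haplotype is None:
        local = "no area of overall dominance"      # tree outside any dominance zone
    elif sample_haplotype == neighbouring_haplotype:
        local = "likely local"
    else:
        local = "possibly non-local"                # may still reflect only short-distance transfer
    return native, local

print(classify("H2", "H2"))   # ('likely native', 'likely local')
print(classify("H3", "H1"))   # ('likely native', 'possibly non-local')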
Abstract:
This paper gives a review of recent progress in the design of numerical methods for computing the trajectories (sample paths) of solutions to stochastic differential equations. We give a brief survey of the field, focusing on a number of application areas where approximations to strong solutions are important, with particular emphasis on computational biology, and give the necessary analytical tools for understanding some of the important concepts associated with stochastic processes. We present the stochastic Taylor series expansion as the fundamental mechanism for constructing effective numerical methods, give general results that relate local and global order of convergence, and mention the Magnus expansion as a mechanism for designing methods that preserve the underlying structure of the problem. We also present various classes of explicit and implicit methods for strong solutions, based on the underlying structure of the problem. Finally, we discuss implementation issues relating to maintaining the Brownian path, efficient simulation of stochastic integrals and variable-step-size implementations based on various types of control.
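The simplest member of the family of strong methods surveyed here is the Euler-Maruyama scheme, obtained by truncating the stochastic Taylor expansion after the first two terms (strong order 1/2); a minimal sketch, illustrated on geometric Brownian motion rather than any example from the paper:

import numpy as np

def euler_maruyama(mu, sigma, x0, t_end, n_steps, rng):
    """Simulate one sample path of dX = mu(X) dt + sigma(X) dW."""
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))                 # Brownian increment
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * dw
    return x

rng = np.random.default_rng(42)
path = euler_maruyama(mu=lambda x: 0.05 * x, sigma=lambda x: 0.2 * x,
                      x0=1.0, t_end=1.0, n_steps=1000, rng=rng)
print(path[-1])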
Abstract:
In this paper we apply a new method for the determination of the surface area of carbonaceous materials, using the local surface excess isotherms obtained from Grand Canonical Monte Carlo (GCMC) simulation and a concept of area distribution in terms of the energy well-depth of the solid-fluid interaction. The range of this well-depth considered in our GCMC simulation is from 10 to 100 K, which is wide enough to cover all carbon surfaces that we dealt with (for comparison, the well-depth for a perfect graphite surface is about 58 K). Having the set of local surface excess isotherms and the differential area distribution, the overall adsorption isotherm can be obtained in an integral form. Thus, given experimental data of nitrogen or argon adsorption on a carbon material, the differential area distribution can be obtained from the inversion process, using the regularization method. The total surface area is then obtained as the area of this distribution. We test this approach with a number of data sets from the literature and compare our GCMC surface area with that obtained from the classical BET method. In general, we find that the difference between these two surface areas is about 10%, indicating the need to determine the surface area with a consistent and reliable method. We therefore suggest the approach of this paper as an alternative to the BET method because of the long-recognized unrealistic assumptions used in the BET theory. Besides the surface area obtained by this method, it also provides information about the differential area distribution versus the well-depth. This information could be used as a microscopic fingerprint of the carbon surface: it is expected that samples prepared from different precursors and different activation conditions will have distinct fingerprints. We illustrate this with Cabot BP120, 280 and 460 samples; the differential area distributions obtained from the adsorption of argon at 77 K and of nitrogen at 77 K have exactly the same patterns, suggesting that the distribution is characteristic of the carbon.
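The inversion step can be sketched with a toy example: the measured isotherm is written as a kernel matrix of local isotherms times an area distribution, and the distribution is recovered by regularized non-negative least squares. The Langmuir-type kernel below is only a placeholder for the GCMC surface excess isotherms, and all numbers are hypothetical.

import numpy as np
from scipy.optimize import nnls

pressures = np.logspace(-4, 0, 40)            # reduced pressures p/p0
well_depths = np.linspace(10.0, 100.0, 30)    # solid-fluid well-depth grid, K

def local_isotherm(p, eps):
    # Toy Langmuir-type kernel whose affinity grows with well depth (placeholder only).
    b = np.exp(eps / 30.0)
    return b * p / (1.0 + b * p)

A = np.array([[local_isotherm(p, e) for e in well_depths] for p in pressures])

# Synthetic "measured" isotherm generated from a known distribution plus noise.
f_true = np.exp(-0.5 * ((well_depths - 58.0) / 8.0) ** 2)
y = A @ f_true + 1e-3 * np.random.default_rng(1).normal(size=pressures.size)

# Tikhonov-style regularization: augment the system to penalize rough solutions.
lam = 1e-2
A_reg = np.vstack([A, lam * np.eye(well_depths.size)])
y_reg = np.concatenate([y, np.zeros(well_depths.size)])
f_est, _ = nnls(A_reg, y_reg)

total_area_proxy = np.trapz(f_est, well_depths)   # area under the recovered distribution
print(total_area_proxy)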
Abstract:
This study uses a sample of young Australian twins to examine whether the findings reported in [Ashenfelter, Orley and Krueger, Alan, (1994). 'Estimates of the Economic Return to Schooling from a New Sample of Twins', American Economic Review, Vol. 84, No. 5, pp.1157-73] and [Miller, P.W., Mulvey, C and Martin, N., (1994). 'What Do Twins Studies Tell Us About the Economic Returns to Education?: A Comparison of Australian and US Findings', Western Australian Labour Market Research Centre Discussion Paper 94/4] are robust to choice of sample and dependent variable. The economic return to schooling in Australia is between 5 and 7 percent when account is taken of genetic and family effects using either fixed-effects models or the selection effects model of Ashenfelter and Krueger. Given the similarity of the findings in this and in related studies, it would appear that the models applied by [Ashenfelter, Orley and Krueger, Alan, (1994). 'Estimates of the Economic Return to Schooling from a New Sample of Twins', American Economic Review, Vol. 84, No. 5, pp. 1157-73] are robust. Moreover, viewing the OLS and IV estimators as lower and upper bounds in the manner of [Black, Dan A., Berger, Mark C., and Scott, Frank C., (2000). 'Bounding Parameter Estimates with Nonclassical Measurement Error', Journal of the American Statistical Association, Vol. 95, No.451, pp.739-748], it is shown that the bounds on the return to schooling in Australia are much tighter than in [Ashenfelter, Orley and Krueger, Alan, (1994). 'Estimates of the Economic Return to Schooling from a New Sample of Twins', American Economic Review, Vol. 84, No. 5, pp. 1157-73], and the return is bounded at a much lower level than in the US. (c) 2005 Elsevier B.V. All rights reserved.
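The within-pair estimator underlying these twin studies can be summarized by the standard fixed-effects specification (schematic notation, not quoted from the cited papers): for twins 1 and 2 in family f,

\ln w_{1f} - \ln w_{2f} = \beta\,(S_{1f} - S_{2f}) + (\varepsilon_{1f} - \varepsilon_{2f}),

so differencing within pairs removes the common family component (and, for identical twins, the genetic component), leaving \beta as the return to schooling purged of those effects; measurement error in reported schooling is what motivates the IV estimates and the bounding argument of Black, Berger and Scott mentioned above.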
Abstract:
The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of a certain class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework within which we work in this paper. For a learning problem with some known VC dimension, much is known about the order of growth of the sample-size requirement of the problem, as a function of the PAC parameters. The exact value of the sample-size requirement is, however, less well known, and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. Hence it is important to know exactly how the sample-size requirement depends on the VC dimension, and with that in mind, we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters), and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimension, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
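For reference, the standard VC-based PAC sample-complexity bounds, in terms of VC dimension d, accuracy \varepsilon and confidence 1 - \delta, are (textbook statements, not results of this paper):

m = O\!\left(\frac{1}{\varepsilon}\left(d\ln\frac{1}{\varepsilon} + \ln\frac{1}{\delta}\right)\right), \qquad m = \Omega\!\left(\frac{d + \ln(1/\delta)}{\varepsilon}\right),

which pin down the order of growth but leave the exact, algorithm-dependent sample size undetermined; closing that gap for d = 1 is the contribution described above.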
Abstract:
We use the Fleissig and Whitney (2003) weak separability test to determine admissible levels of monetary aggregation for the Euro area. We find that the Euro area monetary assets in M2 and M3 are weakly separable and construct admissible Divisia monetary aggregates for these assets. We evaluate the Divisia aggregates as indicator variables, building on Nelson (2002), Reimers (2002), and Stracca (2004). Specifically, we show that real growth of the admissible Divisia aggregates enter the Euro area IS curve positively and significantly for the period from 1980 to 2005. Out of sample, we show that Divisia M2 and M3 appear to contain useful information for forecasting Euro area inflation.
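Divisia aggregates of the kind referred to here are normally constructed with the Törnqvist-Theil approximation to the continuous-time Divisia index (standard formula, not reproduced from the paper):

\Delta \ln M_t = \sum_i \bar{s}_{i,t}\,\Delta \ln m_{i,t}, \qquad \bar{s}_{i,t} = \tfrac{1}{2}\left(s_{i,t} + s_{i,t-1}\right),

where m_{i,t} are the component monetary assets and s_{i,t} their expenditure shares evaluated at user costs, so that components are weighted by the monetary services they provide rather than added at par as in simple-sum M2 and M3.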