915 results for "all substring common subsequence problem"


Relevance: 30.00%

Publisher:

Abstract:

Abstract This study presents a model intercomparison of four regional climate models (RCMs) and one variable resolution atmospheric general circulation model (AGCM) applied over Europe with special focus on the hydrological cycle and the surface energy budget. The models simulated the 15 years from 1979 to 1993 by using quasi-observed boundary conditions derived from ECMWF re-analyses (ERA). The model intercomparison focuses on two large catchments representing two different climate conditions covering two areas of major research interest within Europe. The first is the Danube catchment, which represents a continental climate dominated by advection from the surrounding land areas. It is used to analyse the common model error of a too dry and too warm simulation of the summertime climate of southeastern Europe. This summer warming and drying problem is seen in many RCMs, and to a lesser extent in GCMs. The second area is the Baltic Sea catchment, which represents a maritime climate dominated by advection from the ocean and from the Baltic Sea. This catchment is a research area of many studies within Europe and is also covered by the BALTEX program. The observed data used are monthly mean surface air temperature, precipitation and river discharge. For all models, these are used to estimate mean monthly biases of all components of the hydrological cycle over land. In addition, the mean monthly deviations of the surface energy fluxes from ERA data are computed. Atmospheric moisture fluxes from ERA are compared with those of one model to provide an independent estimate of the convergence bias derived from the observed data. These comparisons help to add weight to some of the inferred estimates and explain some of the discrepancies between them. An evaluation of these biases and deviations suggests possible sources of error in each of the models.
For the Danube catchment, systematic errors in the dynamics cause the prominent summer drying problem for three of the RCMs, while for the fourth RCM this is related to deficiencies in the land surface parametrization. The AGCM does not show this drying problem. For the Baltic Sea catchment, all models similarly overestimate the precipitation throughout the year except during the summer. This model deficit is probably caused by the internal model parametrizations, such as the large-scale condensation and the convection schemes.
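The mean-monthly-bias bookkeeping described above can be sketched as follows. This is a toy illustration with synthetic numbers, not the study's data or catchments: the array shapes and the 0.8 "dry model" factor are invented purely for demonstration.

```python
import numpy as np

# Hypothetical example: mean monthly bias of simulated vs. observed
# precipitation over a catchment. Data are synthetic; a real study
# would use catchment-averaged observed and simulated series.
rng = np.random.default_rng(0)
n_years = 15                                      # e.g. 1979-1993
obs = rng.gamma(2.0, 30.0, size=(n_years, 12))    # mm/month, "observed"
sim = obs * 0.8 + rng.normal(0, 5, size=(n_years, 12))  # a "too dry" model

# Mean monthly bias: average (simulated - observed) per calendar month.
monthly_bias = (sim - obs).mean(axis=0)

# A negative bias in months 6-8 would indicate a summer drying problem.
summer_bias = monthly_bias[5:8].mean()
print(monthly_bias.round(1))
print(f"mean summer (JJA) bias: {summer_bias:.1f} mm/month")
```

The same reduction (average over years, keep the calendar-month axis) applies to any component of the hydrological cycle or surface energy budget.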


Purpose – The purpose of this study is to examine the relationship between business-level strategy and organisational performance and to test the applicability of Porter's generic strategies in explaining differences in the performance of organisations. Design/methodology/approach – The study focussed on manufacturing firms in the UK belonging to the electrical and mechanical engineering sectors. Data were collected through a postal survey instrument from 124 organisations, with all respondents at CEO level. Both objective and subjective measures were used to assess performance. Non-response bias was assessed statistically and was not found to be a major problem affecting this study. Appropriate measures were taken to guard against common method variance (CMV), and statistical tests indicated that CMV does not affect the results of this study. Findings – The results of this study indicate that firms adopting one of the generic strategies, namely cost-leadership or differentiation, perform better than “stuck-in-the-middle” firms, which do not have a dominant strategic orientation. The integrated strategy group has lower performance than cost-leaders and differentiators in terms of financial performance measures. This provides support for Porter's view that combination strategies are unlikely to be effective in organisations. However, the cost-leadership and differentiation strategies were not strongly correlated with the financial performance measures, indicating the limitations of Porter's generic strategies in explaining performance heterogeneity in organisations. Originality/value – This study makes an important contribution to the literature by identifying gaps in the literature through a systematic literature review and addressing those gaps.


The construction field is dynamic and dominated by complex, ill-defined problems for which myriad possible solutions exist. Teaching students to solve construction-related problems requires an understanding of the nature of these complex problems as well as the implementation of effective instructional strategies to address them. Traditional approaches to teaching construction planning and management have long been criticized for presenting students primarily with well-defined problems - an approach inconsistent with the challenges encountered in the industry. However, growing evidence suggests that employing innovative teaching approaches, such as interactive simulation games, offers more active, hands-on, problem-based learning opportunities that let students synthesize and test acquired knowledge in settings more closely aligned with real-life construction scenarios. Simulation games have demonstrated educational value in increasing student problem-solving skills and motivation through critical attributes such as interaction and feedback-supported active learning. Nevertheless, broad acceptance of simulation games in construction engineering education remains limited. While recognizing the benefits, research focused on the role of simulation games in educational settings lacks a unified approach to developing, implementing and evaluating these games. To address this gap, this paper provides an overview of the challenges associated with evaluating the effectiveness of simulation games in construction education that still impede their wide adoption. An overview of the current status, as well as results from the recently implemented Virtual Construction Simulator (VCS) game at Penn State, provides lessons learned intended to guide future efforts to develop interactive simulation games that reach their full potential.


A potential problem with the Ensemble Kalman Filter is the implicit Gaussian assumption at analysis times. Here we explore the performance of a recently proposed fully nonlinear particle filter, in which the Gaussian assumption is not made, on a high-dimensional but simplified ocean model. The model simulates the evolution of the vorticity field in time, described by the barotropic vorticity equation, in a highly nonlinear flow regime. While common knowledge is that particle filters are inefficient and need large numbers of model runs to avoid degeneracy, the newly developed particle filter needs only of the order of 10-100 particles on large-scale problems. The crucial new ingredient is that the proposal density can be used not only to ensure that all particles end up in high-probability regions of state space as defined by the observations, but also to ensure that most of the particles have similar weights. Using identical-twin experiments we found that the ensemble mean follows the truth reliably, and the difference from the truth is captured by the ensemble spread. A rank histogram is used to show that the truth run is indistinguishable from any of the particles, demonstrating the statistical consistency of the method.
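For readers unfamiliar with the mechanics, a particle filter can be sketched minimally as below. This is a generic sequential importance resampling (SIR) filter on an invented 1D random-walk model, not the proposal-density scheme of the abstract, which is considerably more sophisticated; the sketch only illustrates the propagate/weight/resample cycle that any particle filter shares.

```python
import numpy as np

# Toy 1D twin experiment: a hidden random walk observed with noise.
rng = np.random.default_rng(1)
n_steps, n_particles, obs_sd, model_sd = 50, 100, 0.5, 0.3

truth = np.cumsum(rng.normal(0, model_sd, n_steps))   # hidden state
obs = truth + rng.normal(0, obs_sd, n_steps)          # noisy observations

particles = np.zeros(n_particles)
means = []
for t in range(n_steps):
    # propagate particles through the model dynamics
    particles = particles + rng.normal(0, model_sd, n_particles)
    # weight by the Gaussian observation likelihood
    w = np.exp(-0.5 * ((obs[t] - particles) / obs_sd) ** 2)
    w /= w.sum()
    means.append(np.dot(w, particles))
    # resample to keep the weights from degenerating
    particles = rng.choice(particles, size=n_particles, p=w)

rmse = np.sqrt(np.mean((np.array(means) - truth) ** 2))
print(f"ensemble-mean RMSE: {rmse:.2f} (obs error sd = {obs_sd})")
```

The abstract's key point is that a cleverly chosen proposal density makes this basic cycle viable with only 10-100 particles even in high dimensions, where a naive SIR filter would degenerate.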


1. A set of 316 modern surface pollen samples, sampling all the alpine vegetation types that occur on the Tibetan Plateau, has been compiled and analysed. Between 82% and 92% of the pollen present in these samples is derived from only 28 major taxa. These 28 taxa include examples of both tree (AP) and herb (NAP) pollen types.
2. Most of the modern surface pollen samples accurately reflect the composition of the modern vegetation in the sampling region. However, airborne dust-trap pollen samples do not provide a reliable assessment of the modern vegetation. Dust-trap samples contain much higher percentages of tree pollen than non-dust-trap samples, and many of the taxa present are exotic. In the extremely windy environments of the Tibetan Plateau, contamination of dust-trap samples by long-distance transport of exotic pollen is a serious problem.
3. The most characteristic vegetation types present on the Tibetan Plateau are alpine meadows, steppe and desert. Non-arboreal pollen (NAP) therefore dominates the pollen samples in most regions. Percentages of arboreal pollen (AP) are high in samples from the southern and eastern Tibetan Plateau, where alpine forests are an important component of the vegetation. The relative importance of forest and non-forest vegetation across the Plateau clearly follows climatic gradients: forests occur on the southern and eastern margins of the Plateau, supported by the penetration of moisture-bearing airmasses associated with the Indian and Pacific summer monsoons; open, treeless vegetation is dominant in the interior and northern margins of the Plateau, far from these moisture sources.
4. The different types of non-forest vegetation are characterized by different modern pollen assemblages. Thus, alpine deserts are characterized by high percentages of Chenopodiaceae and Artemisia, with Ephedra and Nitraria. Alpine meadows are characterized by high percentages of Cyperaceae and Artemisia, with Ranunculaceae and Polygonaceae. Alpine steppe is characterized by high abundances of Artemisia, with Compositae, Cruciferae and Chenopodiaceae. Although Artemisia is a common component of all non-forest vegetation types on the Tibetan Plateau, the presence of other taxa makes it possible to discriminate between the different vegetation types.
5. The good agreement between modern vegetation and modern surface pollen samples across the Tibetan Plateau provides a measure of the reliability of using pollen data to reconstruct past vegetation patterns in non-forested areas.
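The assemblage-based discrimination described in point 4 can be caricatured as a simple rule-based classifier. The taxa follow the abstract, but the percentage thresholds are invented for illustration and are not taken from the study.

```python
# Toy illustration of discriminating non-forest vegetation types by
# characteristic pollen assemblages. Thresholds are hypothetical.
def classify_assemblage(pct):
    """pct: dict mapping taxon name -> percentage in a surface sample."""
    artemisia = pct.get("Artemisia", 0)
    if pct.get("Chenopodiaceae", 0) > 30 and artemisia > 15:
        return "alpine desert"   # Chenopodiaceae + Artemisia dominant
    if pct.get("Cyperaceae", 0) > 30 and artemisia > 10:
        return "alpine meadow"   # Cyperaceae + Artemisia dominant
    if artemisia > 40:
        return "alpine steppe"   # Artemisia dominant, with Compositae etc.
    return "unclassified"

sample = {"Cyperaceae": 45, "Artemisia": 20, "Ranunculaceae": 8}
print(classify_assemblage(sample))
```

Real pollen-based reconstructions use statistical assemblage comparison (e.g. biomisation or modern-analogue methods) rather than hard thresholds, but the principle of discriminating types by co-occurring taxa is the same.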


Low self-esteem is a common, disabling, and distressing problem that has been shown to be involved in the etiology and maintenance of a range of Axis I disorders. Hence, it is a priority to develop effective treatments for low self-esteem. A cognitive-behavioral conceptualization of low self-esteem has been proposed and a cognitive-behavioral treatment (CBT) program described (Fennell, 1997, 1999). As yet there has been no systematic evaluation of this treatment with routine clinical populations. The current case report describes the assessment, formulation, and treatment of a patient with low self-esteem, depression, and anxiety symptoms. At the end of treatment (12 sessions over 6 months), and at 1-year follow-up, the treatment showed large effect sizes on measures of depression, anxiety, and self-esteem. The patient no longer met diagnostic criteria for any psychiatric disorder, and showed reliable and clinically significant change on all measures. As far as we are aware, there are no other published case studies of CBT for low self-esteem that report pre- and posttreatment evaluations, or follow-up data. Hence, this case provides an initial contribution to the evidence base for the efficacy of CBT for low self-esteem. However, further research is needed to confirm the efficacy of CBT for low self-esteem and to compare its efficacy and effectiveness to alternative treatments, including diagnosis-specific CBT protocols.


This article considers whether, in the context of armed conflicts, certain non-refoulement obligations of non-belligerent States can be derived from the 1949 Geneva Conventions. According to Common Article 1 (CA1) thereof, all High Contracting Parties (HCPs) undertake to ‘respect and to ensure respect’ for the four conventions ‘in all circumstances’. It is contended that CA1 applies both in international armed conflicts (IACs) and in non-international armed conflicts (NIACs). In turn, it is suggested that Common Article 3 (CA3) which regulates conduct in NIACs serves as a ‘minimum yardstick’ also applicable in IACs. It is widely (though not uniformly) acknowledged that the undertaking to ‘ensure respect’ in a given armed conflict extends to HCPs that are not parties to it; nevertheless, the precise scope of this undertaking is subject to scholarly debate. This article concerns situations where, in the course of an (international or non-international) armed conflict, persons ‘taking no active part in hostilities’ flee from States where violations of CA3 are (likely to be) occurring to a non-belligerent State. Based on the undertaking in CA1, the central claim of this article is that, as long as risk of exposure to these violations persists, persons should not be refouled notwithstanding possible assessment of whether they qualify as refugees based on the 1951 Refugee Convention definition, or could be eligible for complementary or subsidiary forms of protection that are regulated in regional arrangements. The analysis does not affect the explicit protection from refoulement that the Fourth Geneva Convention accords to ‘protected persons’ (as defined in Article 4 thereof). It is submitted that CA1 should be read in tandem with other obligations of non-belligerent States under the 1949 Geneva Conventions.
Most pertinently, all HCPs are required to take specific measures to repress ‘grave breaches’ and to take measures necessary for the suppression of all acts contrary to the 1949 Geneva Conventions other than the grave breaches. An HCP that is capable of protecting displaced persons from exposure to risks of violations of CA3 and nonetheless refoules them to face such risks is arguably failing to take lawful measures at its disposal in order to suppress acts contrary to the conventions and, consequently, fails to ‘ensure respect’ for the conventions.

Keywords: Non-refoulement; International Armed Conflict; Non-International Armed Conflict; Common Article 1; Common Article 3


The transformations in Slovak agriculture from the 1950s to the present day, considering both the generic (National and EU) and site-specific (local) drivers of landscape change, were analysed in five mountain study areas in the country. An interdisciplinary approach included analysis of population trends and evaluation of land use and landscape change, combined with exploration of the perceptions of local stakeholders and results of previous biodiversity studies. The generic processes active from the 1950s to 1970s were critical for all study areas, with impacts lasting up to the present day. Agricultural collectivisation, agricultural intensification and land abandonment had negative effects in all study areas. However, the precise impacts on the landscape differed between study areas due to site-specific attributes (e.g. population trends, geographic localisation and local attitudes and opportunities), and these played a decisive role in determining the trajectory of change. Regional contrasts in rural development between these territories have increased in the last two decades, also owing to shortcomings in governmental support. The recent Common Agricultural Policy developments are focused on maintenance of intensive large-scale farming rather than direct enhancement of agro-biodiversity and rural development at the local scale. In this context, local, site-specific attributes can and must form an essential part of rural development plans, to meet the demands for management of the diversity of agricultural mountain landscapes and facilitate the multifunctional role of agriculture.


We consider a generic basic semi-algebraic subset S of the space of generalized functions, that is, a set given by (not necessarily countably many) polynomial constraints. We derive necessary and sufficient conditions for an infinite sequence of generalized functions to be realizable on S, namely to be the moment sequence of a finite measure concentrated on S. Our approach combines the classical results about the moment problem on nuclear spaces with the techniques recently developed to treat the moment problem on basic semi-algebraic sets of R^d. In this way, we determine realizability conditions that can be more easily verified than the well-known Haviland-type conditions. Our result completely characterizes the support of the realizing measure in terms of its moments. As concrete examples of semi-algebraic sets of generalized functions, we consider the set of all Radon measures and the set of all measures having bounded Radon–Nikodym density w.r.t. the Lebesgue measure.
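As a finite-dimensional point of comparison (not the paper's infinite-dimensional setting), the classical Hamburger moment problem on R admits a simple necessary realizability check: a moment sequence can come from a positive measure only if its Hankel matrices are positive semidefinite. A small sketch, using the moments of the standard Gaussian:

```python
import numpy as np

# For the classical Hamburger moment problem on R, a sequence
# (m_0, m_1, ...) is realizable by a positive measure iff every Hankel
# matrix H_n = [m_{i+j}]_{i,j<=n} is positive semidefinite. Here we
# check the truncated condition for given moments.
def hankel_psd(moments):
    n = (len(moments) + 1) // 2
    H = np.array([[moments[i + j] for j in range(n)] for i in range(n)])
    return bool(np.all(np.linalg.eigvalsh(H) >= -1e-10))

# Moments m_0..m_6 of N(0,1): odd moments vanish, even are (k-1)!!.
gauss = [1, 0, 1, 0, 3, 0, 15]
bad = [1, 0, -1, 0, 3, 0, 15]   # m_2 < 0: no measure can realize this
print(hankel_psd(gauss), hankel_psd(bad))
```

The paper's contribution is, roughly, the analogue of such checks for measures on nuclear spaces of generalized functions, with the support constrained to a semi-algebraic set.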


Summary Reasons for performing study: Metabonomics is emerging as a powerful tool for disease screening and investigating mammalian metabolism. This study aims to create a metabolic framework by producing a preliminary reference guide for the normal equine metabolic milieu. Objectives: To metabolically profile plasma, urine and faecal water from healthy racehorses using high resolution 1H-NMR spectroscopy and to provide a list of dominant metabolites present in each biofluid for the benefit of future research in this area. Study design: This study was performed using seven Thoroughbreds in race training at a single time-point. Urine and faecal samples were collected non-invasively and plasma was obtained from samples taken for routine clinical chemistry purposes. Methods: Biofluids were analysed using 1H-NMR spectroscopy. Metabolite assignment was achieved via a range of 1D and 2D experiments. Results: A total of 102 metabolites were assigned across the three biological matrices. A core metabonome of 14 metabolites was ubiquitous across all biofluids. All biological matrices provided a unique window on different aspects of systemic metabolism. Urine was the most populated metabolite matrix with 65 identified metabolites, 39 of which were unique to this biological compartment. A number of these were related to gut microbial host co-metabolism. Faecal samples were the most metabolically variable between animals; acetate was responsible for the majority (28%) of this variation. Short chain fatty acids were the predominant features identified within this biofluid by 1H-NMR spectroscopy. Conclusions: Metabonomics provides a platform for investigating complex and dynamic interactions between the host and its consortium of gut microbes and has the potential to uncover markers for health and disease in a variety of biofluids.
Inherent variation in faecal extracts, together with the relative abundance of microbial-mammalian metabolites in urine and the invasive nature of plasma sampling, suggests that urine is the most appropriate biofluid for the purposes of metabonomic analysis.
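Computationally, the "core metabonome" described above reduces to a set intersection across biofluids. The sketch below uses invented placeholder metabolite lists, not the study's actual 102 assignments:

```python
# Hypothetical sketch of deriving a core metabonome: the metabolites
# assigned in every biofluid. Names are illustrative placeholders.
plasma = {"glucose", "lactate", "alanine", "acetate", "creatinine"}
urine = {"hippurate", "glucose", "lactate", "creatinine", "acetate"}
faecal = {"acetate", "butyrate", "glucose", "lactate", "creatinine"}

core = plasma & urine & faecal           # ubiquitous across all biofluids
unique_to_urine = urine - plasma - faecal  # compartment-specific signals
print(sorted(core))
print(sorted(unique_to_urine))
```

In the study the same idea yields a 14-metabolite core and 39 urine-specific metabolites out of 102 assignments.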


The Guise of the Good thesis has received much attention since Anscombe's brief defence in her book Intention. I approach it here from a less common perspective - indirectly, via a theory explaining how it is that moral behaviour is even possible. After setting out how morality requires the employment of a fundamental test, I argue that moral behaviour involves orientation toward the good. Immoral behaviour cannot, however, involve orientation to evil as such, given the theory of evil as privation. There must always be orientation to good of some kind for immorality even to be possible. Evil can, nevertheless, be intended, but this must be carefully understood in terms of the metaphysic of good and evil I set out. Given that metaphysic, the Guise of the Good is a virtual corollary.


Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising solution to reduce the model’s dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods independently estimate each individual regression problem based on tensor decomposition, which allows simultaneous projections of an input tensor onto more than one direction along each mode. In practice, multi-dimensional data are collected under the same or very similar conditions, so that the data share some common latent components but can also have their own independent parameters for each regression task. Therefore, it is beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies not only the common components of parameters across all the regression tasks, but also the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, with lower memory cost than tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
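A minimal sketch of the parameter saving behind a Tucker-format coefficient tensor is given below, for a single matrix-valued covariate. Shapes and ranks are invented for illustration; the proposed model additionally links multiple regression tasks and imposes a sparsity-preserving regulariser, which this sketch omits.

```python
import numpy as np

# Tucker-format coefficient tensor for a matrix covariate X (d1 x d2):
# a small core G with factor matrices U1, U2 replaces the full B, so the
# parameter count drops from d1*d2 to r1*r2 + d1*r1 + d2*r2.
rng = np.random.default_rng(2)
d1, d2, r1, r2 = 30, 40, 3, 2

G = rng.normal(size=(r1, r2))    # core (a matrix, since the tensor is 2-way)
U1 = rng.normal(size=(d1, r1))   # mode-1 factor
U2 = rng.normal(size=(d2, r2))   # mode-2 factor
B = U1 @ G @ U2.T                # reconstructed full coefficient tensor

X = rng.normal(size=(d1, d2))    # one tensor (matrix) covariate
y_hat = np.sum(X * B)            # linear prediction <X, B>

full_params = d1 * d2                      # 1200 for these shapes
tucker_params = r1 * r2 + d1 * r1 + d2 * r2
print(y_hat, full_params, tucker_params)
```

Linking tasks then amounts to sharing some factors (the common components) across regressions while keeping task-specific factors separate.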


Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (2010–2013), algorithms for the production of long-term total column aerosol optical depth (AOD) datasets from European Earth Observation sensors are developed. Starting with eight existing precursor algorithms, three analysis steps are conducted to improve and qualify the algorithms: (1) a series of experiments applied to one month of global data to understand several major sensitivities to assumptions needed due to the ill-posed nature of the underlying inversion problem, (2) a round robin exercise of "best" versions of each of these algorithms (defined using the step 1 outcome) applied to four months of global data to identify mature algorithms, and (3) a comprehensive validation exercise applied to one complete year of global data produced by the algorithms selected as mature based on the round robin exercise. The algorithms tested included four using AATSR, three using MERIS and one using PARASOL. This paper summarizes the first step. Three experiments were conducted to assess the potential impact of major assumptions in the various aerosol retrieval algorithms. In the first experiment a common set of four aerosol components was used to provide all algorithms with the same assumptions. The second experiment introduced an aerosol property climatology, derived from a combination of model and sun photometer observations, as a priori information in the retrievals on the occurrence of the common aerosol components. The third experiment assessed the impact of using a common nadir cloud mask for AATSR and MERIS algorithms in order to characterize the sensitivity to remaining cloud contamination in the retrievals against the baseline dataset versions.
The impact of the algorithm changes was assessed for one month (September 2008) of data: qualitatively by inspection of monthly mean AOD maps, and quantitatively by comparing daily gridded satellite data against daily averaged AERONET sun photometer observations for the different versions of each algorithm, globally (land and coastal) and for three regions with different aerosol regimes. The analysis allowed for an assessment of sensitivities of all algorithms, which helped define the best algorithm versions for the subsequent round robin exercise; all algorithms (except for MERIS) showed some improvement, in parts significant. In particular, using common aerosol components, and partly also the a priori aerosol-type climatology, is beneficial. On the other hand, the use of an AATSR-based common cloud mask brought a clear improvement (though with a significant reduction of coverage) for the MERIS standard product, but not for the algorithms using AATSR. It is noted that these observations are mostly consistent across all five analyses (global land, global coastal, three regional), which is to be expected, since the set of aerosol components defined in Sect. 3.1 was explicitly designed to cover different global aerosol regimes (with low- and high-absorption fine mode, sea salt and dust).


Let X be a locally compact Polish space. A random measure on X is a probability measure on the space of all (nonnegative) Radon measures on X. Denote by K(X) the cone of all Radon measures η on X which are of the form η =


Atmospheric pollution over South Asia attracts special attention due to its effects on regional climate, water cycle and human health. These effects are potentially growing owing to rising trends of anthropogenic aerosol emissions. In this study, the spatio-temporal aerosol distributions over South Asia from seven global aerosol models are evaluated against aerosol retrievals from NASA satellite sensors and ground-based measurements for the period of 2000–2007. Overall, substantial underestimations of aerosol loading over South Asia are found systematically in most model simulations. Averaged over the entire South Asia, the annual mean aerosol optical depth (AOD) is underestimated by a range of 15 to 44% across models compared to MISR (Multi-angle Imaging SpectroRadiometer), which is the lowest bound among various satellite AOD retrievals (from MISR, SeaWiFS (Sea-Viewing Wide Field-of-View Sensor), MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua and Terra). In particular during the post-monsoon and wintertime periods (i.e., October–January), when agricultural waste burning and anthropogenic emissions dominate, models fail to capture AOD and aerosol absorption optical depth (AAOD) over the Indo–Gangetic Plain (IGP) compared to ground-based Aerosol Robotic Network (AERONET) sunphotometer measurements. The underestimations of aerosol loading in models generally occur in the lower troposphere (below 2 km), based on comparisons of aerosol extinction profiles calculated by the models with those from Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) data. Furthermore, surface concentrations of all aerosol components (sulfate, nitrate, organic aerosol (OA) and black carbon (BC)) from the models are found to be much lower than in situ measurements in winter.
Several possible causes of these common problems of underestimating aerosols in models during the post-monsoon and wintertime periods are identified: (1) aerosol hygroscopic growth and the formation of secondary inorganic aerosol are suppressed in the models because relative humidity (RH) is biased far too low in the boundary layer, so that foggy conditions are poorly represented in current models; (2) nitrate aerosol is either missing or inadequately accounted for; and (3) emissions from agricultural waste burning and biofuel usage are too low in the emission inventories. These common problems and possible causes found in multiple models point out directions for future model improvements in this important region.