887 results for Level-Set Method
Abstract:
Mixing layer height (MLH) is one of the key parameters in describing lower tropospheric dynamics, and capturing its diurnal variability is crucial, especially for interpreting surface observations. In this paper we introduce a method for identifying MLH below the minimum range of a scanning Doppler lidar operated in vertically pointing mode. The proposed method is based on velocity variance in low-elevation-angle conical scanning and is applied to measurements in two very different coastal environments: Limassol, Cyprus, during summer and Loviisa, Finland, during winter. At both locations, the new method agrees well with MLH derived from turbulent kinetic energy dissipation rate profiles obtained from vertically pointing measurements. The low-level scanning routine frequently indicated non-zero MLH less than 100 m above the surface. Such low MLHs were more common in wintertime Loviisa on the Baltic Sea coast than in summertime Mediterranean Limassol.
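As a rough illustration of the kind of processing a low-elevation conical (VAD-type) scan implies, the sketch below removes a first-order sinusoidal fit in azimuth from the radial velocities at each range gate and diagnoses MLH as the depth of the near-surface layer where the residual velocity variance stays above a turbulence threshold. The threshold value, variable names and the exact decision rule are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def mlh_from_conical_scan(radial_velocity, azimuth_deg, ranges_m,
                          elevation_deg=4.0, var_threshold=0.1):
    """Estimate mixing layer height (MLH) from one low-elevation conical scan.

    radial_velocity : array (n_azimuths, n_gates) of Doppler velocities (m/s)
    azimuth_deg     : array (n_azimuths,) of beam azimuth angles (degrees)
    ranges_m        : array (n_gates,) of range-gate distances (m)

    NOTE: the variance threshold and the decision rule are assumptions
    made for this sketch only.
    """
    az = np.deg2rad(azimuth_deg)
    # Design matrix for a first-order VAD fit: v_r ~ a + b*cos(az) + c*sin(az)
    A = np.column_stack([np.ones_like(az), np.cos(az), np.sin(az)])

    heights = ranges_m * np.sin(np.deg2rad(elevation_deg))
    residual_var = np.full(ranges_m.size, np.nan)
    for gate in range(ranges_m.size):
        v = radial_velocity[:, gate]
        good = np.isfinite(v)
        if good.sum() < 8:          # not enough beams for a stable fit
            continue
        coef, *_ = np.linalg.lstsq(A[good], v[good], rcond=None)
        residual = v[good] - A[good] @ coef
        residual_var[gate] = residual.var()

    # Take MLH as the top of the contiguous turbulent layer adjacent to the surface.
    turbulent = residual_var > var_threshold
    mlh = 0.0
    for h, turb in zip(heights, turbulent):
        if not turb:
            break
        mlh = h
    return mlh
```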
Abstract:
We present ocean model sensitivity experiments aimed at separating the influence of the projected changes in the “thermal” (near-surface air temperature) and “wind” (near-surface winds) forcing on the patterns of sea level and ocean heat content. In the North Atlantic, the distribution of sea level change is governed mainly by the “thermal” forcing, whereas in the North Pacific it is governed mainly by the “wind” forcing; in the Southern Ocean, the two forcings have a comparable influence. In the ocean adjacent to Antarctica the “thermal” forcing leads to an inflow of warmer waters onto the continental shelves, which is somewhat attenuated by the “wind” forcing. The structure of the vertically integrated heat uptake is set by different processes at low and high latitudes: at low latitudes it is dominated by the heat transport convergence, whereas at high latitudes it represents a small residual of changes in the surface flux and advection of heat. The structure of the horizontally integrated heat content tendency is set by the increase of downward heat flux by the mean circulation and a comparable decrease of upward heat flux by the subgrid-scale processes; the upward eddy heat flux decreases and increases by almost the same magnitude in response to, respectively, the “thermal” and “wind” forcing. Regionally, the surface heat loss and deep convection weaken in the Labrador Sea but intensify in the Greenland Sea in the region of sea ice retreat. The enhanced heat flux anomaly in the subpolar Atlantic is mainly caused by the “thermal” forcing.
Abstract:
A method is proposed for merging different nadir-sounding climate data records using measurements from high-resolution limb sounders to provide a transfer function between the different nadir measurements. The two nadir-sounding records need not be overlapping so long as the limb-sounding record bridges between them. The method is applied to global-mean stratospheric temperatures from the NOAA Climate Data Records based on the Stratospheric Sounding Unit (SSU) and the Advanced Microwave Sounding Unit-A (AMSU), extending the SSU record forward in time to yield a continuous data set from 1979 to present, and providing a simple framework for extending the SSU record into the future using AMSU. SSU and AMSU are bridged using temperature measurements from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), which is of high enough vertical resolution to accurately represent the weighting functions of both SSU and AMSU. For this application, a purely statistical approach is not viable since the different nadir channels are not sufficiently linearly independent, statistically speaking. The near-global-mean linear temperature trends for extended SSU for 1980–2012 are −0.63 ± 0.13, −0.71 ± 0.15 and −0.80 ± 0.17 K decade⁻¹ (95 % confidence) for channels 1, 2 and 3, respectively. The extended SSU temperature changes are in good agreement with those from the Microwave Limb Sounder (MLS) on the Aura satellite, with both exhibiting a cooling trend of ~ 0.6 ± 0.3 K decade⁻¹ in the upper stratosphere from 2004 to 2012. The extended SSU record is found to be in agreement with high-top coupled atmosphere–ocean models over the 1980–2012 period, including the continued cooling over the first decade of the 21st century.
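A hedged sketch of the kind of weighting-function averaging that lets a limb sounder bridge two nadir channels: MIPAS-like temperature profiles are convolved with each nadir channel's weighting function to produce channel-equivalent temperatures, and the mean offset between the simulated channels over the limb record provides a simple transfer function. The weighting functions, the pressure grid and the offset-based transfer are illustrative assumptions, not the exact NOAA merging procedure.

```python
import numpy as np

def channel_equivalent_temperature(temp_profile, weights):
    """Weighted vertical average of a limb-sounder temperature profile.

    temp_profile : array (n_levels,) of temperatures (K) on a fixed grid
    weights      : array (n_levels,) nadir-channel weighting function
                   (hypothetical values; normalised internally)
    """
    w = np.asarray(weights, float)
    w = w / w.sum()
    return float(np.dot(w, temp_profile))

def transfer_offset(limb_profiles, w_ssu, w_amsu):
    """Mean SSU-minus-AMSU offset implied by a set of limb profiles.

    limb_profiles : array (n_times, n_levels) of MIPAS-like profiles.
    Returns an additive correction to apply to the AMSU-equivalent series so
    that it continues the SSU-equivalent series (a simplifying assumption;
    a regression-based transfer function could be used instead).
    """
    t_ssu = np.array([channel_equivalent_temperature(p, w_ssu) for p in limb_profiles])
    t_amsu = np.array([channel_equivalent_temperature(p, w_amsu) for p in limb_profiles])
    return float(np.mean(t_ssu - t_amsu))
```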
Abstract:
During the last few years Enterprise Architecture (EA) has received increasing attention in industry and academia. EA can be defined as (i) a formal description of the current and future state(s) of an organisation, and (ii) a managed change between these states to meet the organisation's stakeholders' goals and to create value for the organisation. By adopting EA, organisations may gain a number of benefits, such as better decision making, increased revenues and cost reductions, and alignment of business and IT. To increase the performance of public sector operations, and to improve public services and their availability, the Finnish Parliament ratified the Act on Information Management Governance in Public Administration in 2011. The Act mandates public sector organisations, including Higher Education Institutions (HEIs), to start adopting EA by 2014. Despite the benefits of EA and the Act, the EA adoption level and maturity in Finnish HEIs are low. This is partly because EA adoption has been found to be difficult, so there is a need for a solution that helps organisations adopt EA successfully. This thesis follows the Design Science (DS) approach to improve the traditional EA adoption method and thereby increase the likelihood of successful adoption. First, a model is developed to explain change resistance during EA adoption. To identify problems associated with EA adoption, an EA pilot conducted in 2010 among 12 Finnish HEIs was analysed using the model. Most of the problems were found to be caused by misunderstood EA concepts, attitudes, and lack of skills, which the traditional EA adoption method does not address. To overcome these limitations, an improved EA Adoption Method (EAAM) is introduced. By following EAAM, organisations may increase the likelihood of successful EA adoption. EAAM helps in acquiring the mandate for EA adoption from top management, which has been found to be crucial to success. It also helps in supporting individual and organisational learning, which has also been found to be essential for successful adoption.
Abstract:
Sclera segmentation has been shown to be of significant importance for eye and iris biometrics. However, it has not been extensively researched as a separate topic and has mainly been treated as a component of a broader task. This paper proposes a novel sclera segmentation algorithm for colour images which operates at the pixel level. By exploring various colour spaces, the proposed approach is made robust to image noise and different gaze directions. The algorithm's robustness is further enhanced by a two-stage classifier: at the first stage, a set of simple classifiers is employed, while at the second stage, a neural network classifier operates on the space of probabilities generated by the stage-1 classifiers. The proposed method was ranked 1st in the Sclera Segmentation Benchmarking Competition 2015, part of BTAS 2015, with a precision of 95.05% at a recall of 94.56%.
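The two-stage design described above is essentially a stacked classifier: simple per-pixel classifiers produce class probabilities, and a neural network is trained on those probabilities. The sketch below illustrates that structure with generic scikit-learn models and hypothetical per-pixel colour-space features; the specific stage-1 classifiers, features and network architecture used by the authors are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

class TwoStagePixelClassifier:
    """Pixel-level stacked classifier: stage-1 probabilities feed a stage-2 MLP.

    The stage-1 models and the MLP size are illustrative choices, not the
    competition-winning configuration.
    """

    def __init__(self):
        self.stage1 = [LogisticRegression(max_iter=1000),
                       GaussianNB(),
                       DecisionTreeClassifier(max_depth=5)]
        self.stage2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)

    def _stage1_probs(self, X):
        # Concatenate the sclera-class probability from each simple classifier.
        return np.column_stack([clf.predict_proba(X)[:, 1] for clf in self.stage1])

    def fit(self, X, y):
        # X: (n_pixels, n_features) colour-space features; y: 1 = sclera, 0 = other.
        # For simplicity the MLP is trained on in-sample stage-1 probabilities;
        # cross-validated stacking would be more robust.
        for clf in self.stage1:
            clf.fit(X, y)
        self.stage2.fit(self._stage1_probs(X), y)
        return self

    def predict(self, X):
        return self.stage2.predict(self._stage1_probs(X))
```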
Abstract:
Noncompetitive bids have recently become a major concern in both public and private sector construction contract auctions. Consequently, several models have been developed to help identify bidders potentially involved in collusive practices. However, most of these models require complex calculations and extensive information that is difficult to obtain. The aim of this paper is to take recent developments for detecting abnormal bids in capped auctions (auctions with an upper bid limit set by the auctioneer) and extend them to the more conventional uncapped auctions (where no such limit is set). To accomplish this, a new method is developed for estimating the values of the bid distribution supports by using the solution to what has become known as the German Tank problem. The model is then demonstrated and tested on a sample of real construction bid data and shown to detect cover bids with high accuracy. This paper contributes to an improved understanding of abnormal bid behavior as an aid to detecting and monitoring potential collusive bidding practices.
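For reference, the German Tank problem gives unbiased estimators of the endpoints of a uniform distribution from an observed sample, which is the role it plays in estimating the bid distribution supports here. The sketch below applies the standard continuous-uniform endpoint estimators to a list of bids; how the estimated support is then used to flag abnormal bids is not reproduced, and the example bid values are hypothetical.

```python
def uniform_support_estimates(bids):
    """German-Tank-style estimates of the support [a, b] of a uniform bid distribution.

    For k observations from a continuous uniform distribution, the unbiased
    endpoint estimators are
        a_hat = min - (max - min) / (k - 1)
        b_hat = max + (max - min) / (k - 1)
    """
    k = len(bids)
    if k < 2:
        raise ValueError("need at least two bids")
    lo, hi = min(bids), max(bids)
    spread = (hi - lo) / (k - 1)
    return lo - spread, hi + spread

# Example: estimate the support from a hypothetical set of tender bids.
bids = [1.92e6, 2.05e6, 1.98e6, 2.14e6, 2.01e6]
a_hat, b_hat = uniform_support_estimates(bids)
print(f"estimated bid support: [{a_hat:.0f}, {b_hat:.0f}]")
```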
Abstract:
Background: The amount and structure of genetic diversity in dessert apple germplasm conserved at a European level is largely unknown, since all diversity studies conducted in Europe until now have been performed on regional or national collections. Here, we applied a common set of 16 SSR markers to genotype more than 2,400 accessions across 14 collections representing three broad European geographic regions (North+East, West and South), with the aim of analyzing the extent, distribution and structure of variation in the apple genetic resources in Europe. Results: A Bayesian model-based clustering approach showed that diversity was organized in three groups, although these were only moderately differentiated (FST = 0.031). A nested Bayesian clustering approach identified subgroups that revealed internal patterns of substructure within the groups, allowing a finer delineation of the variation into eight subgroups (FST = 0.044). The first level of stratification revealed an asymmetric division of the germplasm among the three groups, and a clear association was found with the geographical regions of origin of the cultivars. The substructure revealed clear partitioning of genetic groups among countries, but also interesting associations between subgroups and the breeding purposes of recent cultivars or particular usages such as cider production. Additional parentage analyses allowed us to identify both putative parents of more than 40 old and/or local cultivars, giving interesting insights into the pedigrees of some emblematic cultivars. Conclusions: The variation found at the group and subgroup levels may reflect a combination of historical processes of migration and selection and of adaptation to diverse agricultural environments that, together with genetic drift, have resulted in extensive genetic variation but limited population structure. The European dessert apple germplasm represents an important source of genetic diversity with a strong historical and patrimonial value. The present work thus constitutes a decisive step in the field of conservation genetics. Moreover, the data obtained can be used to define a European apple core collection useful for further identification of genomic regions associated with commercially important horticultural traits in apple through genome-wide association studies.
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depend on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may hamper the follow-up of the components in time and consequently contribute to a misinterpretation of the data. To deal with these limitations, we introduce a very powerful statistical tool for analysing jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, exhaustively varying the quantities associated with the method. Our results show that even in the most challenging tests, the cross-entropy method was able to find the correct parameters to within the 1 per cent level. Even for a non-precessing jet, our optimization method could successfully point out the lack of precession.
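The cross-entropy method itself is generic: draw candidate parameter vectors from a sampling distribution, keep an elite fraction with the best objective values, and refit the distribution to the elite set until it collapses. The sketch below is a minimal Gaussian cross-entropy minimizer applied to a toy least-squares fit; the precession model, parameter ranges and stopping rule of the paper are not reproduced, and all names and settings here are illustrative.

```python
import numpy as np

def cross_entropy_minimize(objective, mean, std, n_samples=200,
                           elite_frac=0.1, n_iter=100, smoothing=0.7,
                           tol=1e-6, rng=None):
    """Minimal Gaussian cross-entropy method for continuous optimization.

    objective : callable mapping a parameter vector to a scalar cost
    mean, std : initial sampling mean and standard deviation (per parameter)
    """
    rng = np.random.default_rng(rng)
    mean, std = np.asarray(mean, float), np.asarray(std, float)
    n_elite = max(2, int(elite_frac * n_samples))
    for _ in range(n_iter):
        samples = rng.normal(mean, std, size=(n_samples, mean.size))
        costs = np.array([objective(s) for s in samples])
        elite = samples[np.argsort(costs)[:n_elite]]
        # Smoothed update of the sampling distribution towards the elite set.
        mean = smoothing * elite.mean(axis=0) + (1 - smoothing) * mean
        std = smoothing * elite.std(axis=0) + (1 - smoothing) * std
        if std.max() < tol:      # sampling distribution has collapsed
            break
    return mean

# Toy usage: recover the parameters of a noiseless sinusoid from sampled data.
# (A sign-flipped solution (-A, -omega) is an equally valid global minimum.)
t = np.linspace(0.0, 10.0, 50)
true_params = np.array([1.5, 0.8])                 # amplitude, angular frequency
data = true_params[0] * np.sin(true_params[1] * t)
cost = lambda p: np.sum((p[0] * np.sin(p[1] * t) - data) ** 2)
best = cross_entropy_minimize(cost, mean=[1.0, 1.0], std=[1.0, 1.0])
print("recovered parameters:", best)
```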
Abstract:
Traditionally, comparative cytogenetic studies have been based mainly on banding patterns. Nevertheless, when dealing with species with highly rearranged genomes, as in Akodon species, or with other highly divergent species, cytogenetic comparisons of banding patterns prove inadequate. Hence, comparative chromosome painting has become the method of choice for genome comparisons at the cytogenetic level, since it allows complete chromosome probes of one species to be hybridized in situ onto chromosomes of other species, detecting homologous genomic regions between them. In the present study, we explored the highly rearranged complements of Akodon species using reciprocal chromosome painting with species-specific chromosome probes obtained by chromosome sorting. The results revealed complete homology among the complements of Akodon sp. n. (ASP), 2n = 10; Akodon cursor (ACU), 2n = 15; Akodon montensis (AMO), 2n = 24; and Akodon paranaensis (APA), 2n = 44, and extensive chromosome rearrangements were detected within the species with high precision, including Robertsonian and tandem rearrangements, pericentric inversions and/or centromere repositioning, paracentric inversions, translocations, insertions, and breakpoints at which rearrangements appear to be favoured. Chromosome painting using the APA set of 21 autosomes plus X and Y revealed eight syntenic segments shared with A. montensis, A. cursor, and ASP, one syntenic segment shared by A. montensis and A. cursor, five exclusive chromosome associations for A. cursor, and six for ASP. Chromosome X (except for the heterochromatin region of ASP X) and even chromosome Y shared complete homology among the species. These data indicate that these closely related species have experienced a recent, extensive process of autosomal rearrangement in which, except for ASP, there is still complete conservation of sex chromosome homologies.
Abstract:
Our aim was to investigate the immediate effects of bilateral, 830 nm, low-level laser therapy (LLLT) on high-intensity exercise and biochemical markers of skeletal muscle recovery, in a randomised, double-blind, placebo-controlled, crossover trial set in a sports physiotherapy clinic. Twenty male athletes (nine professional volleyball players and eleven adolescent soccer players) participated. Active LLLT (830 nm wavelength, 100 mW, spot size 0.0028 cm², 3-4 J per point) or an identical placebo LLLT was delivered to five points in the rectus femoris muscle (bilaterally). The main outcome measures were the work performed in the Wingate test (30 s of maximum cycling with a load of 7.5% of body weight) and the blood lactate (BL) and creatine kinase (CK) levels measured before and after exercise. There was no significant difference in the work performed during the Wingate test (P > 0.05) between subjects given active LLLT and those given placebo LLLT. For the volleyball athletes, the change in CK levels from before to after the exercise test was significantly lower (P = 0.0133) for those given active LLLT (2.52 ± 7.04 U l⁻¹) than for those given placebo LLLT (28.49 ± 22.62 U l⁻¹). For the soccer athletes, the change in blood lactate levels from before exercise to 15 min after exercise was significantly lower (P < 0.01) in the group given active LLLT (8.55 ± 2.14 mmol l⁻¹) than in the group given placebo LLLT (10.52 ± 1.82 mmol l⁻¹). LLLT irradiation before the Wingate test seemed to inhibit the expected post-exercise increase in CK level and to accelerate post-exercise lactate removal without affecting test performance. These findings suggest that LLLT may be of benefit in accelerating post-exercise recovery.
Abstract:
In this paper, we compare the performance of two statistical approaches for the analysis of data from social research. In the first approach, we use normal models with joint regression modelling of the mean and of the variance heterogeneity. In the second approach, we use hierarchical models. In the first case, individual and social variables are included as explanatory variables in the regression models for the mean and for the variance, while in the second case the level-1 variance of the hierarchical model depends on the individuals (their age), and at level 2 the variance is assumed to change according to socioeconomic stratum. Applying these methodologies, we analyze a Colombian height data set to find differences that can be explained by socioeconomic conditions. We also present some theoretical and empirical results concerning the two models. From this comparative study, we conclude that it is better to jointly model the mean and the variance heterogeneity in all cases. We also observe that convergence of the Gibbs sampling chain used in the Markov chain Monte Carlo estimation of the joint mean and variance model is achieved quickly.
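One common way to write the joint mean and variance-heterogeneity specification described here is shown below; the covariate vectors, link function and any priors used in the paper are not reproduced, so this is a generic form only.

```latex
% Joint modelling of the mean and of the variance heterogeneity
% (generic normal specification; covariate vectors x_i, z_i are placeholders)
\begin{align*}
  y_i &\sim \mathcal{N}(\mu_i, \sigma_i^2), \qquad i = 1, \dots, n,\\
  \mu_i &= \mathbf{x}_i^{\top}\boldsymbol{\beta}
    && \text{(regression model for the mean)},\\
  \log \sigma_i^2 &= \mathbf{z}_i^{\top}\boldsymbol{\gamma}
    && \text{(log-linear model for the variance heterogeneity)}.
\end{align*}
```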
Abstract:
We consider the two-level network design problem with intermediate facilities. This problem consists of designing a minimum cost network that respects some requirements, usually described in terms of the network topology or of a desired flow of commodities between source and destination vertices. Each selected link must receive one of two types of edge facilities, and the connection of different edge facilities requires a costly and capacitated vertex facility. We propose a hybrid decomposition approach which heuristically obtains tentative solutions for the number and location of the vertex facilities and uses these solutions to limit the computational burden of a branch-and-cut algorithm. We test our method on instances of the power system secondary distribution network design problem. The results show that the method is efficient both in terms of solution quality and of computational time.
Abstract:
The NMR spin coupling parameters ¹J(N,H) and ²J(H,H) and the chemical shielding σ(¹⁵N) of liquid ammonia are studied with a combined and sequential QM/MM methodology. Monte Carlo simulations are performed to generate statistically uncorrelated configurations that are submitted to density functional theory calculations. Two different Lennard-Jones potentials are used in the liquid simulations. Electronic polarization is included in these two potentials via an iterative procedure, with and without geometry relaxation, and its influence on the calculated properties is analyzed. B3LYP/aug-cc-pVTZ-J calculations were used to compute the ¹J(N,H) constants, which lie in the interval of -67.8 to -63.9 Hz, depending on the theoretical model used. These can be compared with the experimental result of -61.6 Hz. For the ²J(H,H) coupling the theoretical results vary between -10.6 and -13.01 Hz; the indirect experimental result derived from the partially deuterated liquid is -11.1 Hz. Inclusion of explicit hydrogen-bonded molecules gives a small but important contribution. The vapor-to-liquid shifts are also considered. This shift is calculated to be negligible for ¹J(N,H), in agreement with experiment, which is rationalized as a cancellation of the geometry relaxation and pure solvent effects. For the chemical shielding σ(¹⁵N), calculations at the B3LYP/aug-pcS-3 level show that the vapor-to-liquid chemical shift requires the explicit use of solvent molecules. Considering only one ammonia molecule in an electrostatic embedding gives the wrong sign for the chemical shift, which is corrected only with the use of explicit additional molecules. The best calculated value of the vapor-to-liquid chemical shift Δσ(¹⁵N) is -25.2 ppm, in good agreement with the experimental value of -22.6 ppm.
Abstract:
Low-level laser therapy (LLLT), also referred to as therapeutic laser, has been recommended for a wide array of clinical procedures, among them the treatment of dentinal hypersensitivity. However, the mechanism that guides this process remains unknown. Therefore, the objective of this study was to evaluate in vitro the effects of LLL irradiation on cell metabolism (MTT assay), alkaline phosphatase (ALP) expression and total protein synthesis. The expression of genes encoding collagen type 1 (Col-1) and fibronectin (FN) was analyzed by RT-PCR. For this purpose, an odontoblast-like cell line (MDPC-23) was cultured in Petri dishes (15,000 cells/cm²) and submitted to stress conditions for 12 h. Thereafter, six applications of monochromatic near-infrared radiation (GaAlAs) set at predetermined parameters were performed at 12-h intervals. Non-irradiated cells served as the control group. Neither the MTT values nor the total protein levels of the irradiated group differed significantly from those of the control group (Mann-Whitney test; p > 0.05). On the other hand, the irradiated cells showed a decrease in ALP activity (Mann-Whitney test; p < 0.05). RT-PCR results demonstrated a trend towards a reduction in specific gene expression after cell irradiation, though this was not statistically significant (Mann-Whitney test; p > 0.05). It may be concluded that, under the tested conditions, the LLLT parameters used in the present study did not influence cell metabolism but slightly reduced the expression of some specific proteins.
Abstract:
Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set box-constraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of Augmented Lagrangians that come from the popular Powell-Hestenes-Rockafellar scheme. A combined algorithm, which uses the quasi-Newton formula or a truncated-Newton procedure depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the CUTE collection are presented.
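For context, the Powell-Hestenes-Rockafellar (PHR) augmented Lagrangian referred to above has the following standard form for a problem with equality constraints h(x) = 0 and inequality constraints g(x) ≤ 0, with the box constraints kept explicit in the subproblems; the notation here is generic rather than taken from the paper.

```latex
% PHR augmented Lagrangian for  min f(x)  s.t.  h(x) = 0,  g(x) <= 0,  l <= x <= u,
% where the box is left as an explicit constraint of each subproblem.
\[
  L_{\rho}(x,\lambda,\mu) \;=\; f(x)
  \;+\; \frac{\rho}{2}\left(
      \left\lVert h(x) + \frac{\lambda}{\rho} \right\rVert^{2}
    + \left\lVert \max\!\left(0,\; g(x) + \frac{\mu}{\rho}\right) \right\rVert^{2}
  \right),
\]
where $\rho > 0$ is the penalty parameter and $\lambda$, $\mu \ge 0$ are the
Lagrange multiplier estimates; each outer iteration minimizes $L_{\rho}$
subject only to the box $l \le x \le u$.
```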