827 results for Multiple-scale processing


Relevance:

30.00%

Publisher:

Abstract:

Sustainable yields from water wells in hard-rock aquifers are achieved when the well bore intersects fracture networks. Fracture networks are often not readily discernible at the surface. Lineament analysis using remotely sensed satellite imagery has been employed to identify surface expressions of fracturing, and a variety of image-analysis techniques have been successfully applied in “ideal” settings. An ideal setting for lineament detection is one where the influences of human development, vegetation, and climate are minimal and the hydrogeological conditions and geologic structure are known. There is not yet a well-accepted protocol for mapping lineaments, nor have different approaches been compared in non-ideal settings. A new image-processing/synthesis approach was developed to identify satellite imagery types suited to lineament analysis in non-ideal terrain. Four satellite sensors (ASTER, Landsat7 ETM+, QuickBird, RADARSAT-1) and a digital elevation model were evaluated for lineament analysis in Boaco, Nicaragua, where varied vegetative cover, a plethora of anthropogenic features, and frequent cloud cover limit the availability of optical satellite data. A variety of digital image processing techniques were employed and lineament interpretations were performed to obtain 12 complementary image products, which were evaluated subjectively to identify lineaments. The 12 lineament interpretations were synthesized into a raster image of lineament zone coincidence that shows the level of agreement among the interpretations. A composite lineament interpretation was then made from the coincidence raster, restricting lineament observations to areas where at least 4 of the 12 interpretations agree. Nine of the 11 previously mapped faults were identified from the coincidence raster. An additional 26 lineaments were identified from the coincidence raster, and the locations of 10 were confirmed by field observation. Four manual pumping tests suggest that well productivity is higher for wells proximal to lineament features. Interpretations from RADARSAT-1 products were superior to those from the other sensors, suggesting that quality lineament interpretation in this region requires minimizing anthropogenic features and maximizing topographic expression. The approach developed in this study has the potential to improve the siting of wells in non-ideal regions.
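The synthesis step above reduces to stacking binary lineament rasters and thresholding the per-cell agreement count. A minimal sketch in Python (NumPy assumed; array names and shapes are illustrative, not from the study):

    import numpy as np

    def coincidence_raster(interpretations):
        # Sum a stack of equally shaped binary rasters (1 = lineament zone),
        # one per image product (12 in the study above).
        return np.stack(interpretations, axis=0).sum(axis=0)

    def composite_lineaments(coincidence, min_agreement=4):
        # Keep only cells where at least `min_agreement` interpretations
        # agree, as in the composite interpretation described above.
        return coincidence >= min_agreement

    # Illustrative usage with random stand-in rasters:
    rasters = [np.random.randint(0, 2, (100, 100), dtype=np.uint8)
               for _ in range(12)]
    mask = composite_lineaments(coincidence_raster(rasters))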

Relevance:

30.00%

Publisher:

Abstract:

This dissertation presents competitive control methodologies for small-scale power systems (SSPS). An SSPS is a collection of sources and loads sharing a common network that can be isolated during terrestrial disturbances. Micro-grids, naval ship electric power systems (NSEPS), aircraft power systems and telecommunication power systems are typical examples. Unlike large utility grids, an SSPS lacks a defined slack bus, which complicates the analysis and development of its control systems. In addition, a change in any load or source alters the system parameters in real time, so the control system must provide enough flexibility to keep the SSPS operating as a single aggregated system. In most cases the sources and loads of an SSPS are equipped with power electronic interfaces, which can be modeled as dynamic controllable quantities. The mathematical formulation of the micro-grid is carried out using game theory, optimal control and the fundamental theory of electrical power systems; the micro-grid can then be viewed as a dynamical multi-objective optimization problem with nonlinear objectives and variables. Detailed analysis of optimal solutions was carried out for start-up transient modeling, bus selection modeling and the level of communication within the micro-grid, and in each case a detailed mathematical model was formed to observe the system response. A differential game-theoretic approach was used for modeling and optimizing start-up transients. The start-up transient controller was implemented with open-loop, PI and feedback control methodologies, and a hardware implementation was carried out to validate the theoretical results. The proposed game-theoretic controller outperforms the traditional PI controller during start-up; in addition, the optimal transient surface is necessary when implementing the feedback controller for the start-up transient. The experimental results agree with the theoretical simulations. Bus selection and team communication were modeled with discrete and continuous game-theory models. Although players have multiple choices, the controller is capable of choosing the optimum bus, and the team communication structures optimize the players' Nash equilibrium point. All mathematical models are based only on local information of the load or source; as a result, they are the key to developing accurate distributed controllers.
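As a toy illustration of the discrete bus-selection game, the sketch below enumerates the pure-strategy Nash equilibria of a two-player matrix game; the payoff values are invented for illustration and do not come from the dissertation:

    import numpy as np

    def pure_nash(A, B):
        # A[i, j]: payoff to player 1 (e.g. source 1 choosing bus i),
        # B[i, j]: payoff to player 2 (source 2 choosing bus j).
        equilibria = []
        for i in range(A.shape[0]):
            for j in range(A.shape[1]):
                # (i, j) is an equilibrium if i is a best response to j
                # and j is a best response to i.
                if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                    equilibria.append((i, j))
        return equilibria

    # Invented payoffs: two sources choosing between two buses.
    A = np.array([[3.0, 1.0], [2.0, 4.0]])
    B = np.array([[2.0, 1.0], [1.0, 3.0]])
    print(pure_nash(A, B))  # -> [(0, 0), (1, 1)]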

Relevance:

30.00%

Publisher:

Abstract:

Important food crops like rice are constantly exposed to various stresses that can have a devastating effect on their survival and productivity. Being sessile, these highly evolved organisms have developed elaborate molecular machinery to sense a mixture of stress signals and elicit a precise response that minimizes damage. However, recent discoveries have revealed that the interplay of these stress-regulatory and signaling molecules is highly complex and remains largely unknown. In this work, we conducted large-scale analyses of differential gene expression using advanced computational methods to dissect the regulation of stress response, which is at the heart of all molecular changes leading to the observed phenotypic susceptibility. One of the most important stress conditions in terms of loss of productivity is drought. We performed genomic and proteomic analyses of epigenetic and miRNA mechanisms in the regulation of drought-responsive genes in rice and found subsets of genes with striking properties. Overexpressed gene sets contained higher numbers of epigenetic marks, miRNA targets and transcription factors that regulate drought tolerance. Underexpressed gene sets, in contrast, were poor in these features but rich in metabolic genes with multiple co-expression partners, contributing substantially to drought resistance. Identifying and characterizing the patterns exhibited by differentially expressed genes holds the key to uncovering the synergistic and antagonistic components of the cross-talk between stress response mechanisms. We performed a meta-analysis of drought and bacterial stresses in rice and Arabidopsis and identified hundreds of shared genes, finding a high level of conservation of gene expression between these stresses. Weighted co-expression network analysis detected two tight clusters of genes, made up of master transcription factors and signaling genes, showing strikingly opposite expression status. To comprehensively identify the stress-responsive genes shared between multiple abiotic and biotic stresses in rice, we performed separate meta-analyses of microarray studies from seven abiotic and six biotic stresses and found more than thirteen hundred shared stress-responsive genes. Machine learning techniques using these genes classified the stresses into two major classes, namely abiotic and biotic, as well as into multiple classes of individual stresses, with high accuracy, and identified the top genes showing distinct patterns of expression. Functional enrichment and co-expression network analysis revealed the different roles of plant hormones and transcription factors in conserved and non-conserved gene sets in the regulation of stress response.
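A hedged sketch of the classification step: train a classifier on expression values of the shared stress-responsive genes to separate abiotic from biotic stress samples (scikit-learn assumed; the data matrix below is synthetic, not the study's microarray data):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in: rows = microarray samples, columns = shared
    # stress-responsive genes (the study reports more than 1300 such genes).
    X = rng.normal(size=(60, 1300))
    y = np.array([0] * 30 + [1] * 30)   # 0 = abiotic, 1 = biotic
    X[y == 1, :50] += 1.0               # make a few genes informative

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())  # classification accuracy

    # Feature importances highlight the most discriminative genes.
    clf.fit(X, y)
    print(np.argsort(clf.feature_importances_)[::-1][:10])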

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, we consider Bayesian inference for the detection of variance change-points in models with scale mixtures of normal (SMN) distributions. This class of distributions is symmetric and thick-tailed and includes the Gaussian, Student-t, contaminated normal, and slash distributions as special cases. The proposed models provide greater flexibility for analyzing practical data, which often exhibit heavy tails and may not satisfy the normality assumption. For the Bayesian analysis, we specify prior distributions for the unknown parameters of the variance change-point models with SMN distributions. Owing to the complexity of the joint posterior distribution, we propose an efficient Gibbs-type sampling algorithm with Metropolis–Hastings steps for posterior inference. Following the idea of [1], we then consider both single and multiple change-point detection. The performance of the proposed procedures is illustrated and analyzed through simulation studies, and a real application to closing-price data from the U.S. stock market is presented for illustration.
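A schematic Gibbs sampler for the Gaussian special case with a single variance change-point and known zero mean (the SMN mixing variables and the Metropolis–Hastings steps of the full algorithm are omitted; data and priors are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic series with a variance change at t = 120.
    y = np.concatenate([rng.normal(0, 1.0, 120), rng.normal(0, 3.0, 80)])
    n = len(y)
    a0, b0 = 2.0, 1.0              # inverse-gamma prior hyperparameters
    k, s1, s2 = n // 2, 1.0, 1.0   # initial change-point and variances

    def inv_gamma(a, b):
        return 1.0 / rng.gamma(a, 1.0 / b)

    csum = np.cumsum(y ** 2)
    ks = np.arange(1, n)
    draws = []
    for it in range(3000):
        # Conjugate updates of the two segment variances given k.
        s1 = inv_gamma(a0 + k / 2, b0 + 0.5 * csum[k - 1])
        s2 = inv_gamma(a0 + (n - k) / 2, b0 + 0.5 * (csum[-1] - csum[k - 1]))
        # Exact draw of k from its discrete full conditional.
        loglik = (-0.5 * ks * np.log(s1) - 0.5 * csum[ks - 1] / s1
                  - 0.5 * (n - ks) * np.log(s2)
                  - 0.5 * (csum[-1] - csum[ks - 1]) / s2)
        p = np.exp(loglik - loglik.max())
        k = rng.choice(ks, p=p / p.sum())
        draws.append(k)

    print("posterior mode of change-point:", np.bincount(draws).argmax())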

Relevance:

30.00%

Publisher:

Abstract:

A randomised, placebo-controlled, double-blind study was conducted on 25 dogs with atopic dermatitis, skin test reactivity, and elevated serum IgE to Dermatophagoides farinae (Df) and at least one additional allergen. Dogs were treated with either a Df-restricted immunotherapy solution (n=14) or a placebo (n=11) and evaluated 6 weeks and 3, 5, 7 and 9 months after the initiation of treatment using a clinical scoring system (SASSAD) and pruritus analogue scale scores. The Df-restricted solution and the placebo had an equal effect on both pruritus and the skin manifestations (P>0.05). The results of this study indicate that in dogs whose atopic dermatitis is based on hypersensitivity to environmental allergens in addition to D. farinae, Df-restricted immunotherapy is insufficient to control the disease. Consequently, solutions for allergen-specific immunotherapy should remain customised.

Relevance:

30.00%

Publisher:

Abstract:

Current advanced cloud infrastructure management solutions allow scheduling actions that dynamically change the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose a dynamically generated scaling model for the VMs containing the services of distributed applications, able to react to variations in the number of application users. We answer the following question: how can one dynamically decide how many services of each type are needed to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs that control the scaling of distributed services, combining data analysis mechanisms with application benchmarking over multiple VM configurations. By processing the data sets generated from multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. Combining this set of predictor metrics with a heuristic for selecting appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used to control the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of the enterprise class of information systems, and show how dynamically generated SLAs can successfully control the management of distributed service scaling.
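A minimal illustration of an inferred SLA scaling rule: a regression model trained on benchmark-generated data predicts an SLA parameter (here, response time) from monitoring metrics, and a scale-out is triggered when the prediction exceeds the agreed bound. All metric names, data and thresholds below are invented, not taken from the paper:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)

    # Synthetic benchmark data: predictor metrics per service instance,
    # e.g. requests/s and CPU utilisation.
    X = rng.uniform([50, 0.2], [500, 0.95], size=(200, 2))
    # Invented ground truth: response time grows with load per instance.
    y = 20 + 0.4 * X[:, 0] * X[:, 1] + rng.normal(0, 5, 200)

    model = LinearRegression().fit(X, y)
    SLA_RESPONSE_MS = 120.0  # agreed SLA bound (invented)

    def scale_out_needed(reqs_per_s, cpu_util):
        # Inferred scaling rule: scale out when the predicted response
        # time would violate the SLA bound.
        return model.predict([[reqs_per_s, cpu_util]])[0] > SLA_RESPONSE_MS

    print(scale_out_needed(450, 0.9))  # high load: scale-out expected
    print(scale_out_needed(80, 0.3))   # low load: no action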

Relevance:

30.00%

Publisher:

Abstract:

Covert brain activity related to task-free, spontaneous (i.e. unrequested) emotional evaluation of human face images was analysed in 27-channel averaged event-related potential (ERP) map series recorded from 18 healthy subjects while they observed random sequences of face images without further instructions. After the recording, subjects self-rated each face image on a scale from “liked” to “disliked”. These ratings were used to dichotomize the face images into the affective evaluation categories “liked” and “disliked” for each subject, and the subjects into the affective attitudes “philanthropists” and “misanthropists” (depending on their mean rating across images). Event-related map series were averaged for “liked” and “disliked” face images and for “philanthropists” and “misanthropists”. The spatial configuration (landscape) of the electric field maps was assessed numerically by the electric gravity center, a conservative estimate of the mean location of all active intracerebral electric sources. Differences in electric gravity center location indicate activity of different neuronal populations. The electric gravity center locations of all event-related maps were averaged over the entire stimulus-on time (450 ms). The mean electric gravity center for disliked faces was located (significantly across subjects) more to the right and somewhat more posterior than that for liked faces. Similar differences were found between the mean electric gravity centers of misanthropists (more right and posterior) and philanthropists. These neurophysiological findings are in line with neuropsychological findings, revealing that visual emotional processing depends on affective evaluation category and affective attitude, and extending the conclusions to a paradigm without a directed task.
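One plausible numerical formulation of the electric gravity center, for readers who want to experiment: weight each electrode's position by the magnitude of its average-referenced potential. This weighting is an assumption for illustration; the original method may define it differently:

    import numpy as np

    def electric_gravity_center(positions, potentials):
        # positions:  (n_channels, 3) electrode coordinates
        # potentials: (n_channels,) map values at one time point
        # Assumed weighting: absolute average-referenced potentials.
        u = potentials - potentials.mean()
        w = np.abs(u)
        return (positions * w[:, None]).sum(axis=0) / w.sum()

    # Toy example with 27 channels, averaged over 450 one-ms frames:
    rng = np.random.default_rng(3)
    pos = rng.normal(size=(27, 3))
    maps = rng.normal(size=(450, 27))
    centers = np.array([electric_gravity_center(pos, m) for m in maps])
    print(centers.mean(axis=0))  # mean electric gravity center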

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND A newly developed collagen matrix (CM) of porcine origin has been shown to represent a potential alternative to palatal connective tissue grafts (CTG) for the treatment of single Miller Class I and II gingival recessions when used in conjunction with a coronally advanced flap (CAF). However, it remains unknown to what extent CM may represent a valuable alternative to CTG in the treatment of Miller Class I and II multiple adjacent gingival recessions (MAGR). The aim of this study was to compare the clinical outcomes following treatment of Miller Class I and II MAGR using the modified coronally advanced tunnel technique (MCAT) in conjunction with either CM or CTG. METHODS Twenty-two patients with a total of 156 Miller Class I and II gingival recessions were included in this study. Recessions were randomly treated according to a split-mouth design by means of MCAT + CM (test) or MCAT + CTG (control). The following measurements were recorded at baseline (i.e. prior to surgery) and at 12 months: Gingival Recession Depth (GRD), Probing Pocket Depth (PD), Clinical Attachment Level (CAL), Keratinized Tissue Width (KTW), Gingival Recession Width (GRW) and Gingival Thickness (GT). GT was measured 3 mm apical to the gingival margin. Patient acceptance was recorded using a Visual Analogue Scale (VAS). The primary outcome variable was Complete Root Coverage (CRC); secondary outcomes were Mean Root Coverage (MRC), change in KTW and GT, patient acceptance and duration of surgery. RESULTS Healing was uneventful in both groups, and no adverse reactions were observed at any of the sites. At 12 months, both treatments resulted in statistically significant improvements of CRC, MRC, KTW and GT compared with baseline (p < 0.05). CRC was found at 42% of test sites and at 85% of control sites (p < 0.05). MRC measured 71 ± 21% at test sites versus 90 ± 18% at control sites (p < 0.05). Mean KTW measured 2.4 ± 0.7 mm at test sites versus 2.7 ± 0.8 mm at control sites (p > 0.05). At test sites, GT values changed from 0.8 ± 0.2 to 1.0 ± 0.3 mm, and at control sites from 0.8 ± 0.3 to 1.3 ± 0.4 mm (p < 0.05). Duration of surgery and patient morbidity were statistically significantly lower in the test group than in the control group (p < 0.05). CONCLUSIONS The present findings indicate that the use of CM may represent an alternative to CTG, reducing surgical time and patient morbidity, but yielded lower CRC than CTG in the treatment of Miller Class I and II MAGR when used in conjunction with MCAT.

Relevance:

30.00%

Publisher:

Abstract:

Nonlinear computational analysis of materials showing elasto-plasticity or damage relies on knowledge of their yield behavior and strengths under complex stress states. In this work, a generalized anisotropic quadric yield criterion is proposed that is homogeneous of degree one and takes a convex quadric shape with a smooth transition from ellipsoidal to cylindrical or conical surfaces. If, in the case of material identification, the shape of the yield function is not known a priori, a minimization using the quadric criterion will yield the optimal shape among the convex quadrics. The convexity limits of the criterion and the transition points between the different shapes are identified. Several special cases of the criterion for distinct material symmetries, such as isotropy, cubic symmetry, fabric-based orthotropy and general orthotropy, are presented and discussed. The generality of the formulation is demonstrated by showing its degeneration to several classical yield surfaces, such as the von Mises, Drucker–Prager, Tsai–Wu, Liu, generalized Hill and classical Hill criteria, under appropriate conditions. Applicability of the formulation to micromechanical analyses is shown by transformation of a criterion for porous cohesive-frictional materials by Maghous et al. To demonstrate the advantages of the generalized formulation, bone is chosen as an example material, since it features yield envelopes with different shapes depending on the considered length scale. A fabric- and density-based quadric criterion for the description of the homogenized material behavior of trabecular bone is identified from uniaxial, multiaxial and torsional experimental data. Also, a fabric- and density-based Tsai–Wu yield criterion for homogenized trabecular bone from in silico data is converted to an equivalent quadric criterion by introducing a transformation of the interaction parameters. Finally, a quadric yield criterion for lamellar bone at the microscale is identified from a nanoindentation study reported in the literature, demonstrating the applicability of the generalized formulation to the description of the yield envelope of bone at multiple length scales.
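A generic quadric criterion of this kind, with its stress-dependent part homogeneous of degree one, can be written in Voigt notation as follows (a form consistent with the abstract, not necessarily the authors' exact parameterization):

    % F: symmetric 6x6 matrix of quadratic coefficients (convexity requires
    %    F to be positive semi-definite); f: 6-vector of linear coefficients.
    Y(\boldsymbol{\sigma}) =
        \sqrt{\boldsymbol{\sigma}^{\mathsf{T}} \mathbf{F}\, \boldsymbol{\sigma}}
        + \mathbf{f}^{\mathsf{T}} \boldsymbol{\sigma} - 1 = 0

The square-root term gives the degree-one homogeneity, and zero eigenvalues of F open the ellipsoid into cylindrical or, in combination with the linear term, conical surfaces; classical criteria such as Drucker–Prager are recovered for particular choices of F and f.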

Relevance:

30.00%

Publisher:

Abstract:

The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must therefore rely on a simplified approach that is not highly parameter-dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale, and was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and avoids over-channelization, thus producing more realistic extents. The choice of datasets and algorithms is left to the user, which makes the model suitable for various applications and levels of dataset availability. Among the possible datasets, the DEM is the only one strictly needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results; however, valuable results have still been obtained from lower-quality DEMs with 25 m resolution.
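For reference, the classic Holmgren (1994) direction algorithm that Flow-R's spreading algorithm improves upon distributes flow among downslope neighbours in proportion to tan(slope) raised to an exponent x; x = 1 spreads widely, while large x approaches single-direction (D8) flow. A sketch of the classic form only (Flow-R's modifications are not reproduced here):

    import numpy as np

    def holmgren_weights(dem, row, col, x=4.0, cellsize=10.0):
        # Fraction of flow from cell (row, col) to each of its 8 neighbours,
        # proportional to tan(slope)**x over the downslope neighbours.
        weights = np.zeros((3, 3))
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue
                r, c = row + di, col + dj
                if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
                    continue
                dist = cellsize * np.hypot(di, dj)
                tanb = (dem[row, col] - dem[r, c]) / dist
                if tanb > 0:  # downslope neighbours only
                    weights[di + 1, dj + 1] = tanb ** x
        total = weights.sum()
        return weights / total if total > 0 else weights

    dem = np.array([[10., 9., 8.],
                    [ 9., 8., 6.],
                    [ 8., 7., 5.]])
    print(holmgren_weights(dem, 1, 1))  # flow fractions to each neighbour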

Relevance:

30.00%

Publisher:

Abstract:

The single Hochdorf burial was found in 1887 during construction work in the Canton of Lucerne, Switzerland. It dates from between 320 and 250 BC. The calvarium, the left half of the pelvis and the left femur were preserved. The finding shows an unusual bony alteration of the skull. The aim of this study was to obtain a differential diagnosis and to examine the skull using various methods. Sex and age were determined anthropologically. Radiological examinations were performed with plain X-ray imaging and a multislice computed tomography (CT) scanner. For histological analysis, samples of the lesion were taken. The pathological processing included staining after fixation, decalcification, and paraffin embedding. Hard-cut sections were also prepared. The individual was female. The age at death was between 30 and 50 years. There is an intensely calcified bone proliferation at the right side of the os frontalis. Plain X-ray and CT imaging showed a large sclerotic lesion in the area of the right temple with a partly bulging appearance. The inner boundary of the lesion shows multi-edged irregularities. There is a diffuse thickening of the right side. In the left skull vault, there is a mix of sclerotic areas and areas which appear to be normal with a clear differentiation between tabula interna, diploë and tabula externa. Histology showed mature organised bone tissue. Radiological and histological findings favour a benign condition. Differential diagnoses comprise osteomas which may occur, for example, in the setting of hereditary adenomatous polyposis coli related to Gardner syndrome.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE Texture analysis is an alternative method to quantitatively assess MR images. In this study, we introduce dynamic texture parameter analysis (DTPA), a novel technique to investigate the temporal evolution of texture parameters using dynamic susceptibility contrast enhanced (DSCE) imaging. Here, we aim to introduce the method and its application to enhancing lesions (EL), non-enhancing lesions (NEL) and normal appearing white matter (NAWM) in multiple sclerosis (MS). METHODS We investigated 18 patients with MS or clinically isolated syndrome (CIS) according to the 2010 McDonald criteria, using DSCE imaging at different field strengths (1.5 and 3 Tesla). Tissues of interest (TOIs) were defined within 27 EL, 29 NEL and 37 NAWM areas after normalization, and eight histogram-based texture parameter maps (TPMs) were computed. TPMs quantify the heterogeneity of the TOI. For every TOI, the average, variance, skewness, kurtosis and variance-of-the-variance statistical parameters were calculated. These TOI parameters were further analyzed using one-way ANOVA followed by Wilcoxon rank-sum tests corrected for multiple comparisons. RESULTS Tissue- and time-dependent differences were observed in the dynamics of the computed texture parameters. Sixteen parameters discriminated between EL, NEL and NAWM (pAVG = 0.0005). Significant differences in the DTPA texture maps were found during inflow (52 parameters), outflow (40 parameters) and reperfusion (62 parameters). The strongest discriminators among the TPMs were the variance-related parameters, while the skewness and kurtosis TPMs were in general less sensitive to differences between the tissues. CONCLUSION DTPA of DSCE image time series revealed characteristic time responses for ELs, NELs and NAWM. This may be further used for a refined quantitative grading of MS lesions during their evolution from the acute to the chronic state. DTPA discriminates lesions beyond features of enhancement or T2 hypersignal, on a numeric scale that allows a more subtle grading of MS lesions.
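The five per-TOI statistics are standard distribution moments; a sketch of how they might be computed for each time point of the DSCE series (SciPy assumed; "variance of the variance" is interpreted here as the variance of variances over random subsamples, an assumption since the abstract does not define it):

    import numpy as np
    from scipy import stats

    def toi_statistics(values, rng):
        # values: 1-D array of texture-parameter values of the TOI voxels
        # at one time point of the DSCE series.
        sub = max(2, len(values) // 4)
        sub_vars = [rng.choice(values, size=sub).var() for _ in range(100)]
        return {
            "average": values.mean(),
            "variance": values.var(),
            "skewness": stats.skew(values),
            "kurtosis": stats.kurtosis(values),
            "variance_of_variance": np.var(sub_vars),  # assumed definition
        }

    rng = np.random.default_rng(4)
    series = rng.normal(size=(40, 500))  # 40 time frames, 500 TOI voxels
    curves = [toi_statistics(frame, rng) for frame in series]
    print(curves[0])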

Relevance:

30.00%

Publisher:

Abstract:

Multiple sclerosis (MS) is the most common demyelinating disease affecting the central nervous system. There is no cure for MS and current therapies have limited efficacy. While the majority of individuals with MS develop significant clinical disability, a subset experiences a disease course with minimal impairment even in the presence of significant apparent tissue damage on magnetic resonance imaging (MRI). The current studies combined functional MRI and diffusion tensor imaging (DTI) to elucidate brain mechanisms associated with the lack of clinical disability in patients with MS. Recent evidence has implicated cortical reorganization as a mechanism that limits the clinical manifestation of the disease. Functional MRI was used to test the hypothesis that non-disabled MS patients (Expanded Disability Status Scale ≤ 1.5) show increased recruitment of cognitive control regions (dorsolateral prefrontal and anterior cingulate cortex) while performing sensory, motor and cognitive tasks. Compared to matched healthy controls, patients increased activation of cognitive control brain regions when performing non-dominant hand movements and the 2-back working memory task. Using dynamic causal modeling, we tested whether increased cognitive control recruitment is associated with alterations in connectivity in the working memory functional network. Patients exhibited network connectivity similar to that of control subjects when performing working memory tasks. We subsequently investigated the integrity of major white matter tracts to assess structural connectivity and its relation to activation and functional integration of the cognitive control system. Patients showed substantial alterations in callosal, inferior and posterior white matter tracts and less pronounced involvement of the corticospinal tracts and superior longitudinal fasciculi (SLF). Decreased structural integrity within the right SLF in patients was associated with decreased performance, and with decreased activation and connectivity of the cognitive control system, when performing working memory tasks. These studies suggest that patients with MS without clinical disability increase cognitive control system recruitment across functional domains and rely on preserved functional and structural connectivity of the brain regions associated with this network. Moreover, the current studies show the usefulness of combining brain activation data from functional MRI with structural connectivity data from DTI to improve our understanding of brain adaptation mechanisms in neurological disease.

Relevance:

30.00%

Publisher:

Abstract:

The task of encoding and processing complex sensory input requires many types of transsynaptic signals. This requirement is served in part by an extensive group of neurotransmitter substances, which may include thirty or more different compounds. At the next level of information processing, the existence of multiple receptors for a given neurotransmitter appears to be a widely used mechanism to generate multiple responses to a given first messenger (Snyder and Goodman, 1980). Despite the wealth of published data on GABA receptors, the existence of more than one GABA receptor was in doubt until the mid-1980s. Presently there is still disagreement on the number of types of GABA receptors, estimates of which range from two to four (DeFeudis, 1983; Johnston, 1985). Part of the problem in evaluating data concerning multiple receptor types is the lack of information on the number of gene products and their subsequent supramolecular organization in different neurons. In order to evaluate the question of the diversity of GABA receptors in the nervous system, we must rely on indirect information derived from a wide variety of experimental techniques. These include pharmacological binding studies on membrane fractions, electrophysiological studies, localization studies, purification studies, and functional assays. Almost all parts of the central and peripheral nervous system use GABA as a neurotransmitter, and these experimental techniques have therefore been applied to many different parts of the nervous system for the analysis of GABA receptor characteristics. We are thus left with a large amount of data from a wide variety of techniques derived from many parts of the nervous system. When this project was initiated in 1983, there were only a handful of pharmacological tools to assess the question of multiple GABA receptors. The approach adopted was to focus on a single model system, using a variety of experimental techniques, in order to evaluate the existence of multiple forms of GABA receptors. Using the in vitro rabbit retina, a combination of pharmacological binding studies, functional release studies and partial purification studies was undertaken to examine the GABA receptor composition of this tissue. Three types of GABA receptors were observed: A1 receptors coupled to benzodiazepine and barbiturate modulation, A2 (uncoupled) GABA-A receptors, and GABA-B receptors. These results are evaluated and discussed in light of recent findings by others concerning the number and subtypes of GABA receptors in the nervous system.

Relevance:

30.00%

Publisher:

Abstract:

Previous studies have either exclusively used annual tree-ring data or have combined tree-ring series with other proxy series of lower temporal resolution. Both approaches can lead to significant uncertainties, as tree-rings may underestimate the amplitude of past temperature variations, and the validity of non-annual records cannot be clearly assessed. In this study, we assembled 45 published Northern Hemisphere (NH) temperature proxy records covering the past millennium, each of which satisfied 3 essential criteria: the series must be of annual resolution, span at least a thousand years, and represent an explicit temperature signal. Suitable climate archives included ice cores, varved lake sediments, tree-rings and speleothems. We reconstructed the average annual land temperature series for the NH over the last millennium by applying 3 different reconstruction techniques: (1) principal components (PC) plus second-order autoregressive model (AR2), (2) composite plus scale (CPS) and (3) regularized errors-in-variables approach (EIV). Our reconstruction is in excellent agreement with 6 climate model simulations (including the first 5 models derived from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and an earth system model of intermediate complexity (LOVECLIM)), showing similar temperatures at multi-decadal timescales; however, all simulations appear to underestimate the temperature during the Medieval Warm Period (MWP). A comparison with other NH reconstructions shows that our results are consistent with earlier studies. These results indicate that well-validated annual proxy series should be used to minimize proxy-based artifacts, and that these proxy series contain sufficient information to reconstruct the low-frequency climate variability of the past millennium.
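Of the three techniques, composite-plus-scale is the simplest to outline: standardize each proxy, average the standardized series into a composite, then rescale the composite to the mean and variance of the instrumental target over the calibration interval. A schematic version with synthetic inputs (not the study's data or exact implementation):

    import numpy as np

    def composite_plus_scale(proxies, target, calib):
        # proxies: (n_series, n_years) annual proxy matrix
        # target:  (n_years,) instrumental temperature series
        # calib:   boolean mask marking the calibration years
        mu = proxies[:, calib].mean(axis=1, keepdims=True)
        sd = proxies[:, calib].std(axis=1, keepdims=True)
        composite = ((proxies - mu) / sd).mean(axis=0)  # standardized average
        c = composite[calib]
        # Rescale to the instrumental mean and variance over calibration.
        return ((composite - c.mean()) / c.std()
                * target[calib].std() + target[calib].mean())

    # Toy data: 45 proxies over 1000 years, calibrated on the last 150.
    rng = np.random.default_rng(5)
    truth = np.cumsum(rng.normal(0, 0.05, 1000))
    proxies = truth + rng.normal(0, 1.0, (45, 1000))
    calib = np.zeros(1000, dtype=bool)
    calib[-150:] = True
    recon = composite_plus_scale(proxies, truth, calib)
    print(np.corrcoef(recon, truth)[0, 1])  # agreement with the target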