914 results for Empirical Mode Decomposition, vibration-based analysis, damage detection, signal decomposition
Abstract:
This study aims to identify the variables that affect the effectiveness of business networks. It draws on a review of the existing literature on effectiveness in teams, in organizations and in interorganizational networks, as well as on models and empirical studies that supported the analysis. Based on this review, variables such as network structure, system stability, the commitment of employees in each of the organizations that form part of the network, trust within the network, knowledge transfer and the openness of the system were found to be good predictors of effectiveness within business networks.
Abstract:
The diagnosis of thyroid cancer has increased, and the chances of detecting subclinical disease are high, given that more sensitive and readily accessible detection tools are now available. Clinicians therefore need to know the natural history of the thyroid nodule and of low-risk papillary thyroid carcinoma in order to offer their patients the best treatment based on clinical evidence. The objective of this review is to identify the clinical factors that have driven the unusual increase in thyroid cancer cases. Conclusion: Overdiagnosis of thyroid cancer is a reality, made possible by the widespread use of fine-needle aspiration biopsy (FNAB) after the detection of a thyroid nodule, largely incidental, and in most cases it does not lead to a better prognosis after treatment.
Abstract:
Many bacteria of the fluorescent group of the genus Pseudomonas are able to control plant diseases caused by phytopathogenic fungi and bacteria, acting as biological control agents (ACBs), or show activity as plant growth-promoting bacteria (BPCPs). Several metabolites have been described that play an important role in their activity as ACBs and BPCPs, most notably 2,4-diacetylphloroglucinol (Phl), phenazine-1-carboxylic acid (PCA), pyrrolnitrin (Prn), hydrogen cyanide (HCN), 3-indoleacetic acid (IAA), siderophores and chitinases. The main objective of our work was to compare the characteristics of a group of fluorescent-group Pseudomonas using a polyphasic approach, in order to establish possible relationships between some of these characteristics and the ability to act as an ACB or BPCP. Given the importance for biocontrol of the production of metabolites such as Phl, PCA and Prn, the preliminary objective was to search for and obtain strains producing these metabolites. To achieve this, a molecular approach was used based on the detection of the biosynthetic genes involved in their production rather than direct detection of the metabolites, in order to avoid the effects that culture conditions can have on inducing or repressing their synthesis. Different protocols were applied, based on (i) the assisted search for producers using phenotypic markers with subsequent confirmation by PCR, and (ii) the use of PCR to detect the genes directly in bacterial extracts and in enrichments of these extracts, together with colony hybridization for subsequent isolation. The assisted search for Phl producers using phenotypic markers followed by molecular techniques (PCR amplification of the phlD gene) proved to be the best method for the type of samples processed in our work, in which the proportion of producers is relatively low. In total, 4 strains carrying the genes for PCA synthesis, 15 for Phl and 1 for Prn were isolated from various environments. A collection of 72 fluorescent-group Pseudomonas strains was assembled, comprising 18 of our own isolates carrying the biosynthetic genes required for the production of Phl, PCA and Prn; 6 reference strains from type culture collections; 14 strains producing the different antibiotics provided by other researchers; and a selection of 34 strains from previous work carried out in our research group. The collection includes candidate ACB and BPCP strains for several diseases and plants. The 72 strains were characterized phenotypically and genotypically. Phenotypic characterization comprised identification at species level with API 20NE strips and specific biochemical tests; production of metabolites such as PCA, Phl, Prn, IAA, HCN, chitinases and siderophores, using different techniques; in vitro antagonism in several media against two fungi (Stemphylium vesicarium and Penicillium expansum) and three phytopathogenic bacteria (Erwinia amylovora, Pseudomonas syringae pv. syringae and Xanthomonas arboricola pv. juglandis); the efficacy of infection inhibition in in vivo bioassays on plant material against the fungi P. expansum on apple and S. vesicarium on pear leaves and against the bacterium E. amylovora on immature pear fruit; and, finally, growth-promotion assays on two commercial Prunus rootstocks. It should be noted that P. expansum causes blue mould of apples and pears in postharvest, S. vesicarium causes brown spot of pear and E. amylovora causes fire blight of rosaceous plants. Of the 72 Pseudomonas strains studied, the number producing IAA (4) and chitinases (6) is low, whereas it is high for HCN (32), which is moreover associated with Phl production. The in vitro antagonism results showed that, for the bacteria, antagonism depends on the indicator pathogen and on the culture medium. The presence or absence of iron does not appear to be a factor that enhances antagonism. For the fungi, however, no influence of the culture medium was observed. Of the 72 strains, only a small number (7) showed antagonism in all the media tested against 3 or 4 of the pathogens. Only 2 of these 7 strains also proved effective in bioassays inhibiting the infections caused by 2 of the 3 pathogens tested. Some of the strains that were effective in the bioassays are not in vitro antagonists of the same pathogen in any of the media tested. As regards growth promotion, more strains promoted growth of the plum rootstock Marianna 2624 than of the peach-almond hybrid GF677, and the efficacies achieved were also higher for Marianna 2624, revealing a high strain/rootstock specificity. Genotypic characterization was carried out by restriction fragment length polymorphism analysis of ribosomal DNA (RFLP-rDNA) and by analysis of genomic macrorestriction fragment length polymorphisms of chromosomal DNA separated by pulsed-field gel electrophoresis (MRFLP-PFGE). Both analyses showed great genetic heterogeneity among the strains characterized, and the groupings obtained could not be related to the phenotypic characteristics or to the ability to act as an ACB or BPCP. The genomic macrorestriction patterns (MRFLP-PFGE) of the model bacterium P. fluorescens EPS288 are stable over time and independent of the culture conditions tested in the laboratory or in natural samples, making the technique effective for identifying re-isolates from natural samples previously inoculated with the bacterium. A selection of strains that share the production of phloroglucinol was characterized by RFLP and sequencing of the phlD gene. A relationship was established between the groupings obtained in the RFLP-rDNA and RFLP-phlD analyses and the gene sequences. Phylogenetic analysis of the phlD gene sequences revealed a high degree of polymorphism, yielding 3 main groups. The groups appear to be related to the metabolite production patterns (Phl, HCN and Prn in a first group; Phl and HCN in the second; and only Phl in the third), but they could not be related to the geographic origin of the strains or to their activity as ACBs and/or BPCPs. With the data obtained from the phenotypic and genotypic characterization, a multivariate analysis was performed (correspondence analysis, Spearman correlations and frequency analysis with categorical variables). The importance of having a technique for curating a strain collection by discarding genetically identical strains was demonstrated, since such strains influence the results of the analyses.
For the three pathogens tested as indicators and the two rootstocks used, no correlation between infection inhibition or growth promotion and the phenotypic and genotypic characteristics of the strains was observed that was both significant and consistent across the three techniques used.
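As an illustration of the kind of correlation screening mentioned in this abstract (Spearman correlations between strain characteristics and biocontrol outcomes), the following is a minimal Python sketch; the trait table, column names and values are invented and do not reproduce the study's data or its full multivariate analysis.

```python
# Minimal sketch of a Spearman correlation screen between strain traits and a
# biocontrol outcome. Data and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

strains = pd.DataFrame({
    "phl": [1, 1, 0, 1, 0, 0],          # carries phlD (1) or not (0)
    "hcn": [1, 1, 0, 0, 1, 0],          # HCN production
    "siderophores": [1, 1, 1, 0, 1, 1],
    "infection_inhibition": [0.62, 0.55, 0.10, 0.48, 0.20, 0.05],
})

for trait in ["phl", "hcn", "siderophores"]:
    rho, p = spearmanr(strains[trait], strains["infection_inhibition"])
    print(f"{trait}: rho={rho:.2f}, p={p:.3f}")
```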
Abstract:
A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1 in 5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. As a result, there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
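The two families of performance measure compared in this abstract can be sketched briefly: an areal wet/dry overlap score and an r.m.s. waterline height difference. The paper's exact formulations are not given here, so the overlap measure below (intersection over union of wet areas) and all arrays are assumptions for illustration.

```python
# Sketch of the two kinds of performance measure discussed above:
# (1) an areal fit between observed and modelled wet/dry maps, and
# (2) an r.m.s. difference of water-surface elevations along the waterline.
import numpy as np

def areal_fit(observed_wet: np.ndarray, modelled_wet: np.ndarray) -> float:
    """Overlap of wet areas / union of wet areas (a common flood-extent fit)."""
    both = np.logical_and(observed_wet, modelled_wet).sum()
    either = np.logical_or(observed_wet, modelled_wet).sum()
    return both / either if either else 1.0

def rms_height_difference(obs_heights: np.ndarray, mod_heights: np.ndarray) -> float:
    """r.m.s. difference between observed and modelled waterline elevations (m)."""
    return float(np.sqrt(np.mean((obs_heights - mod_heights) ** 2)))

# Toy example with made-up extents and waterline heights
obs_extent = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 1]], dtype=bool)
mod_extent = np.array([[0, 1, 1], [1, 1, 1], [0, 0, 0]], dtype=bool)
print(areal_fit(obs_extent, mod_extent))
print(rms_height_difference(np.array([10.2, 10.1, 9.9]), np.array([10.4, 10.0, 9.8])))
```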
Abstract:
A new heuristic for the Steiner Minimal Tree problem is presented here. The method described is based on the detection of particular sets of nodes in networks, the “Hot Spot” sets, which are used to obtain better approximations of the optimal solutions. An algorithm is also proposed which is capable of improving the solutions obtained by classical heuristics, by means of a stirring process of the nodes in solution trees. Classical heuristics and an enumerative method are used as comparison terms in the experimental analysis, which demonstrates the effectiveness of the heuristic discussed in this paper.
Abstract:
A new heuristic for the Steiner minimal tree problem is presented. The method described is based on the detection of particular sets of nodes in networks, the “hot spot” sets, which are used to obtain better approximations of the optimal solutions. An algorithm is also proposed which is capable of improving the solutions obtained by classical heuristics, by means of a stirring process of the nodes in solution trees. Classical heuristics and an enumerative method are used as comparison terms in the experimental analysis, which demonstrates the capability of the heuristic discussed.
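The “Hot Spot” heuristic itself is not described in these abstracts, so the sketch below only illustrates the kind of classical baseline it is compared against: an MST-based Steiner-tree approximation, here via networkx. The graph and terminal set are made up.

```python
# Classical Steiner-tree approximation used as a comparison baseline (sketch).
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 2), ("b", "c", 3), ("a", "d", 4),
    ("d", "c", 1), ("b", "e", 2), ("e", "c", 2),
])
terminals = ["a", "c", "e"]      # nodes that must be connected

T = steiner_tree(G, terminals)   # MST-based approximation of the Steiner tree
print(sorted(T.edges(data="weight")))
```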
Abstract:
We introduce a procedure for association-based analysis of nuclear families that allows for dichotomous and more general measurements of phenotype and the inclusion of covariate information. Standard generalized linear models are used to relate phenotype and its predictors. Our test procedure, based on the likelihood ratio, unifies the estimation of all parameters through the likelihood itself and yields maximum likelihood estimates of the genetic relative risk and interaction parameters. Our method has advantages in modelling the covariate and gene-covariate interaction terms over recently proposed conditional score tests that include covariate information via a two-stage modelling approach. We apply our method in a study of human systemic lupus erythematosus and the C-reactive protein that includes sex as a covariate.
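A minimal sketch of the likelihood-ratio idea described above, comparing a full and a reduced generalized linear model for a dichotomous phenotype with a gene-covariate interaction. The data, variable names and binomial family are assumptions for illustration; the family-based (nuclear-family) conditioning of the actual method is not reproduced.

```python
# Likelihood-ratio test between a full and a reduced GLM (illustrative sketch).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "genotype": rng.integers(0, 3, n),   # count of risk alleles (0, 1, 2)
    "sex": rng.integers(0, 2, n),        # covariate
})
logit = -1.0 + 0.5 * df["genotype"] + 0.3 * df["sex"]
df["affected"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

full = smf.glm("affected ~ genotype * sex", data=df,
               family=sm.families.Binomial()).fit()
reduced = smf.glm("affected ~ sex", data=df,
                  family=sm.families.Binomial()).fit()

lr = 2 * (full.llf - reduced.llf)          # likelihood-ratio statistic
df_diff = full.df_model - reduced.df_model  # extra parameters in the full model
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df_diff):.4f}")
```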
Abstract:
DIGE is a protein labelling and separation technique allowing quantitative proteomics of two or more samples by optical fluorescence detection of differentially labelled proteins that are electrophoretically separated on the same gel. DIGE is an alternative to quantitation by MS-based methodologies and can circumvent their analytical limitations in areas such as intact protein analysis, (linear) detection over a wide range of protein abundances and, theoretically, applications where extreme sensitivity is needed. Thus, in quantitative proteomics DIGE is usually complementary to MS-based quantitation and has some distinct advantages. This review describes the basics of DIGE and its unique properties and compares it to MS-based methods in quantitative protein expression analysis.
Abstract:
Different types of mental activity are utilised as an input in Brain-Computer Interface (BCI) systems. One such activity type is based on Event-Related Potentials (ERPs). The characteristics of ERPs are not visible in single trials, so averaging over a number of trials is necessary before the signals become usable. An improvement in ERP-based BCI operation and system usability could be obtained if the use of single-trial ERP data were possible. The method of Independent Component Analysis (ICA) can be utilised to separate single-trial recordings of ERP data into components that correspond to ERP characteristics, background electroencephalogram (EEG) activity and other components of non-cerebral origin. Choosing specific components and using them to reconstruct “denoised” single-trial data could improve the signal quality, thus allowing the successful use of single-trial data without the need for averaging. This paper assesses single-trial ERP signals reconstructed using a selection of estimated components from the application of ICA on the raw ERP data. Signal improvement is measured using contrast-to-noise measures. It was found that such analysis improves the signal quality in all single trials.
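The reconstruction step described above can be sketched as: run ICA on a single trial, keep only the components judged to reflect ERP activity, and invert the mixing to obtain a denoised trial. The data, the component-selection rule and the simple contrast-to-noise estimate below are all illustrative assumptions rather than the paper's procedure.

```python
# ICA-based denoising of a single trial (illustrative sketch).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_channels, n_samples = 8, 512
X = rng.standard_normal((n_samples, n_channels))   # stand-in for one trial (samples x channels)

ica = FastICA(n_components=n_channels, random_state=0)
S = ica.fit_transform(X)                            # estimated independent components

keep = np.array([0, 1, 2])                          # indices of "ERP-like" components (assumed)
S_clean = np.zeros_like(S)
S_clean[:, keep] = S[:, keep]
X_denoised = ica.inverse_transform(S_clean)         # reconstructed "denoised" trial

def contrast_to_noise(signal, erp_window, baseline_window):
    """Mean amplitude in the ERP window divided by the baseline standard deviation."""
    return np.abs(signal[erp_window].mean()) / signal[baseline_window].std()

print(contrast_to_noise(X_denoised[:, 0], slice(300, 350), slice(0, 100)))
```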
Abstract:
Two different ways of performing low-energy electron diffraction (LEED) structure determinations for the p(2 x 2) structure of oxygen on Ni{111} are compared: a conventional LEED-IV structure analysis using integer- and fractional-order IV-curves collected at normal incidence, and an analysis using only integer-order IV-curves collected at three different angles of incidence. A clear discrimination between different adsorption sites can be achieved by the latter approach as well as by the former, and the best-fit structures of both analyses are within each other's error bars (all less than 0.1 angstrom). The conventional analysis is more sensitive to the adsorbate coordinates and lateral parameters of the substrate atoms, whereas the integer-order-based analysis is more sensitive to the vertical coordinates of substrate atoms. Adsorbate-related contributions to the intensities of integer-order diffraction spots are independent of the state of long-range order in the adsorbate layer. These results show, therefore, that for lattice-gas disordered adsorbate layers, for which only integer-order spots are observed, similar accuracy and reliability can be achieved as for ordered adsorbate layers, provided the data set is large enough.
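IV-curve comparisons in LEED structure analysis are normally quantified with a reliability factor; the abstract does not state which one was used, so the Pendry R-factor below is shown only as a common example. Energies, intensities and the imaginary inner potential V0i are invented.

```python
# Pendry R-factor between an experimental and a calculated IV-curve (sketch).
import numpy as np

def pendry_r_factor(energy, i_exp, i_calc, v0i=4.0):
    """Pendry R-factor for two IV-curves on a common energy grid (eV)."""
    def y_function(i):
        l = np.gradient(i, energy) / i            # logarithmic derivative L = I'/I
        return l / (1.0 + (v0i * l) ** 2)
    y1, y2 = y_function(i_exp), y_function(i_calc)
    return np.sum((y1 - y2) ** 2) / np.sum(y1 ** 2 + y2 ** 2)

energy = np.linspace(50, 300, 251)                 # made-up energy range
i_exp = 1.0 + 0.5 * np.sin(energy / 15.0)          # made-up intensities
i_calc = 1.0 + 0.5 * np.sin(energy / 15.0 + 0.1)
print(f"R_P = {pendry_r_factor(energy, i_exp, i_calc):.3f}")
```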
Abstract:
This study proposes a utility-based framework for the determination of optimal hedge ratios (OHRs) that can allow for the impact of higher moments on hedging decisions. We examine the entire hyperbolic absolute risk aversion family of utilities, which includes quadratic, logarithmic, power, and exponential utility functions. We find that for both moderate and large spot (commodity) exposures, the performance of out-of-sample hedges constructed allowing for nonzero higher moments is better than the performance of the simpler OLS hedge ratio. The picture is, however, not uniform across our seven spot commodities, as there is one instance (cotton) for which the modeling of higher moments decreases welfare out-of-sample relative to the simpler OLS. We support our empirical findings by a theoretical analysis of optimal hedging decisions, and we uncover a novel link between OHRs and the minimax hedge ratio, that is, the ratio which minimizes the largest loss of the hedged position.
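The OLS hedge ratio used as the benchmark above is simply the slope from regressing spot price changes on futures price changes, i.e. cov(spot, futures)/var(futures). The sketch below computes it on simulated return series; the data are invented.

```python
# OLS (minimum-variance) hedge ratio on simulated returns.
import numpy as np

rng = np.random.default_rng(42)
futures_returns = rng.normal(0, 0.02, 500)
spot_returns = 0.9 * futures_returns + rng.normal(0, 0.01, 500)

ols_hedge_ratio = np.cov(spot_returns, futures_returns)[0, 1] / np.var(futures_returns, ddof=1)
print(f"OLS hedge ratio: {ols_hedge_ratio:.3f}")

# Variance reduction of the hedged position, spot - h * futures
hedged = spot_returns - ols_hedge_ratio * futures_returns
print(f"Hedged std: {hedged.std(ddof=1):.4f} vs unhedged {spot_returns.std(ddof=1):.4f}")
```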
Abstract:
Risk and uncertainty are, to say the least, poorly considered by most individuals involved in real estate analysis, in both development and investment appraisal. Surveyors continue to express 'uncertainty' about the value (risk) of using relatively objective methods of analysis to account for these factors. These methods attempt to identify the risk elements more explicitly. Conventionally this is done by deriving probability distributions for the uncontrolled variables in the system. A suggested 'new' way of "being able to express our uncertainty or slight vagueness about some of the qualitative judgements and not entirely certain data required in the course of the problem..." uses the application of fuzzy logic. This paper discusses and demonstrates the terminology and methodology of fuzzy analysis. In particular it attempts a comparison of the procedures with those used in 'conventional' risk analysis approaches and critically investigates whether a fuzzy approach offers an alternative to the use of probability-based analysis for dealing with aspects of risk and uncertainty in real estate analysis.
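A minimal sketch of the fuzzy representation discussed above: an uncertain appraisal input expressed as a triangular fuzzy number, from which interval estimates at different membership ("alpha") levels can be read. The class name, the variable and the numbers are invented for illustration and are far simpler than a full fuzzy appraisal model.

```python
# Triangular fuzzy number with alpha-cut intervals (illustrative sketch).
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    low: float    # smallest plausible value (membership 0)
    mode: float   # most plausible value (membership 1)
    high: float   # largest plausible value (membership 0)

    def alpha_cut(self, alpha: float) -> tuple[float, float]:
        """Interval of values whose membership is at least alpha (0..1)."""
        return (self.low + alpha * (self.mode - self.low),
                self.high - alpha * (self.high - self.mode))

rent = TriangularFuzzyNumber(low=90.0, mode=100.0, high=120.0)  # e.g. rent per m2
for alpha in (0.0, 0.5, 1.0):
    print(alpha, rent.alpha_cut(alpha))
```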
Abstract:
We develop a new governance perspective on port–hinterland linkages and related port impacts. Many stakeholders in a port’s hinterland now demand tangible economic benefits from port activities as a precondition for supporting port expansion and infrastructural investments. We use a governance lens to assess this farsighted contracting challenge. We find that most contemporary economic impact assessments of port investment projects pay scant attention to the contractual relationship challenges in port–hinterland relationships. In contrast, we focus explicitly on the spatial distribution of such impacts and the related contractual relationship issues facing port authorities or port users and their stakeholders in the port hinterland. We introduce a new concept, the Port Hinterland Impact (PHI) matrix, which focuses explicitly on the spatial distribution of port impacts and related contractual relationship challenges. The PHI matrix offers insight into port impacts using two dimensions: logistics dedicatedness, as an expression of Williamsonian asset specificity in the sphere of logistics contractual relationships, and geographic reach, with a longer reach typically reflecting the need for more complex contracting to overcome ‘distance’ challenges with external stakeholders. We use the PHI matrix in our empirical, governance-based analysis of contractual relationships between the port authorities in Antwerp and Zeebrugge and their respective stakeholders.
Abstract:
This paper concerns the innovative use of a blend of systems thinking ideas in the ‘Munro Review of Child Protection’, a high-profile examination of child protection activities in England, conducted for the Department for Education. We go ‘behind the scenes’ to describe the OR methodologies and processes employed. The circumstances that led to the Review are outlined. Three specific contributions that systems thinking made to the Review are then described. First, the systems-based analysis and visualisation of how a ‘compliance culture’ had grown up. Second, the creation of a large, complex systems map of current operations and the effects of past policies on them. Third, how the map gave shape to the range of issues the Review addressed and acted as an organising framework for the systemically coherent set of recommendations made. The paper closes with an outline of the main implementation steps taken so far to create a child protection system with the critically reflective properties of a learning organisation, and methodological reflections on the benefits of systems thinking to support organisational analysis.
Abstract:
This paper presents the two datasets (ARENA and P5) and the challenge that form a part of the PETS 2015 workshop. The datasets consist of scenarios recorded using multiple visual and thermal sensors. The scenarios in the ARENA dataset involve different staged activities around a parked vehicle in a parking lot in the UK, and those in the P5 dataset involve different staged activities around the perimeter of a nuclear power plant in Sweden. The scenarios of each dataset are grouped into ‘Normal’, ‘Warning’ and ‘Alarm’ categories. The Challenge specifically includes tasks that account for different steps in a video understanding system: Low-Level Video Analysis (object detection and tracking), Mid-Level Video Analysis (‘atomic’ event detection) and High-Level Video Analysis (‘complex’ event detection). The evaluation methodology used for the Challenge includes well-established measures.
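The abstract does not list the specific evaluation measures; as an example of the well-established building blocks typically used for the object-detection task, the sketch below computes the intersection-over-union (IoU) of a detection against a ground-truth box. The box values are made up.

```python
# Intersection-over-union between two axis-aligned boxes (x1, y1, x2, y2).
def iou(box_a, box_b):
    """IoU of two boxes; 1.0 means perfect overlap, 0.0 means no overlap."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

ground_truth = (10, 10, 60, 80)
detection = (15, 12, 65, 78)
print(f"IoU = {iou(ground_truth, detection):.2f}")   # >= 0.5 is a common match threshold
```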