19 results for Top down approaches
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Background: A common task in analyzing microarray data is to determine which genes are differentially expressed across two (or more) kinds of tissue samples, or samples obtained under different experimental conditions. Several statistical methods have been proposed to accomplish this goal, generally based on measures of distance between classes. It is well known that biological samples are heterogeneous because of factors such as molecular subtypes or genetic background that are often unknown to the experimenter. For instance, in experiments involving molecular classification of tumors it is important to identify significant subtypes of cancer. Bimodal or multimodal distributions often reflect the presence of mixtures of subsamples. Consequently, there can be genes differentially expressed in sample subgroups that are missed when the usual statistical approaches are used. In this paper we propose a new graphical tool which identifies not only genes with up- and down-regulation, but also genes with differential expression in different subclasses, which are usually missed by current statistical methods. This tool is based on two measures of distance between samples, namely the overlapping coefficient (OVL) between two densities and the area under the receiver operating characteristic (ROC) curve. The methodology proposed here was implemented in the open-source R software. Results: The method was applied to a publicly available dataset, as well as to a simulated dataset. We compared our results with those obtained using some of the standard methods for detecting differentially expressed genes, namely the Welch t-statistic, fold change (FC), rank products (RP), average difference (AD), weighted average difference (WAD), moderated t-statistic (modT), intensity-based moderated t-statistic (ibmT), significance analysis of microarrays (samT) and area under the ROC curve (AUC).
In both datasets, the differentially expressed genes with bimodal or multimodal distributions were missed by all the standard selection procedures. We also compared our results with (i) the area between the ROC curve and the rising diagonal (ABCR) and (ii) the test for not proper ROC curves (TNRC). We found our methodology more comprehensive, because it detects both bimodal and multimodal distributions and allows different variances in the two samples. Another advantage of our method is that the behavior of different kinds of differentially expressed genes can be analyzed graphically. Conclusion: Our results indicate that the arrow plot is a new, flexible and useful tool for the analysis of gene expression profiles from microarrays.
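As a rough illustration of the two distance measures behind the arrow plot (a minimal Python sketch, not the authors' R implementation; the function name, kernel bandwidth and grid size are arbitrary choices):

```python
import numpy as np
from scipy.stats import gaussian_kde

def ovl_and_auc(x, y, grid_size=512):
    """Estimate the overlapping coefficient (OVL) between the densities of
    two expression samples, and the empirical area under the ROC curve."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Kernel density estimates evaluated on a common grid
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid_size)
    fx, fy = gaussian_kde(x)(grid), gaussian_kde(y)(grid)
    # OVL is the integral of the pointwise minimum of the two densities
    dx = grid[1] - grid[0]
    ovl = float(np.minimum(fx, fy).sum() * dx)
    # The empirical AUC equals the Mann-Whitney estimate of P(X > Y)
    auc = float(((x[:, None] > y[None, :]) + 0.5 * (x[:, None] == y[None, :])).mean())
    return ovl, auc
```

Intuitively, OVL near 1 with AUC near 0.5 indicates indistinguishable samples, while OVL near 0 with AUC near 0 or 1 indicates a clearly differentially expressed gene; intermediate combinations are where bimodal subclass effects can show up.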
Abstract:
We study the effect that flavor-changing neutral current interactions of the top quark have on the branching ratios of charged decays of the top quark. We have performed an integrated analysis using Tevatron and B-factory data and, with just the further assumption that the Cabibbo-Kobayashi-Maskawa matrix is unitary, we obtain very restrictive bounds on the strong and electroweak flavor-changing neutral current branching ratios: Br(t -> qX) < 4.0x10^(-4), where X is any vector boson and a sum over q = u, c is implied.
Abstract:
An optically addressed large-area colour imager is presented. It consists of a thin wide-band-gap p-i-n a-SiC:H filtering element deposited on top of a thick large-area a-SiC:H(-p)/a-Si:H(-i)/a-SiC:H(-n) image sensor, which itself acts as an intrinsic colour filter. In order to tune the externally applied voltage for full colour discrimination, the photocurrent generated by a modulated red light is measured under different optical and electrical bias conditions. The results reveal that the integrated device behaves as both an imager and a filter, giving information not only on the position where the optical image is absorbed but also on its wavelength and intensity. The amplitude and sign of the image signals are electrically tuneable. Over a wide range of incident fluxes and under reverse bias, the red and blue image signals are opposite in sign and the green signal is suppressed, allowing blue and red colour recognition. The green information is obtained under forward bias, where the blue signal goes down to zero while the red and green signals remain constant. Combining the information obtained at these two applied voltages, an RGB colour image can be acquired without the need for the usual colour filters or pixel architecture. A numerical simulation supports the colour filter analysis.
Abstract:
Although stock prices fluctuate, the variations are relatively small and are frequently assumed to be normally distributed on a large time scale. But sometimes these fluctuations can become determinant, especially when unforeseen large drops in asset prices are observed that could result in huge losses or even in market crashes. The evidence shows that these events happen far more often than would be expected under the generalized assumption of normally distributed financial returns. Thus it is crucial to properly model the distribution tails so as to be able to predict the frequency and magnitude of extreme stock price returns. In this paper we follow the approach suggested by McNeil and Frey (2000) and combine GARCH-type models with Extreme Value Theory (EVT) to estimate the tails of three financial index returns, the DJI, FTSE 100 and NIKKEI 225, representing three important financial areas of the world. Our results indicate that EVT-based conditional quantile estimates are much more accurate than those from conventional AR-GARCH models assuming normal or Student's t-distributed innovations when doing out-of-sample estimation (within the in-sample estimation, this is so for the right tail of the distribution of returns).
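The EVT half of the McNeil-Frey procedure can be sketched as follows (a simplified illustration applied to raw losses, skipping the GARCH pre-whitening of returns; the function name and default thresholds are illustrative):

```python
import numpy as np
from scipy.stats import genpareto

def evt_var(losses, threshold_q=0.95, q=0.99):
    """Peaks-over-threshold estimate of the q-quantile (VaR) of a loss
    distribution. In the full McNeil-Frey method this EVT step is applied
    to the standardized residuals of a fitted GARCH model; here it is
    shown on the losses directly for brevity."""
    losses = np.asarray(losses, float)
    n = losses.size
    u = float(np.quantile(losses, threshold_q))   # high threshold
    exceed = losses[losses > u] - u               # exceedances over u
    nu = exceed.size
    # Fit a generalized Pareto distribution (shape xi, scale beta) to the tail
    xi, _, beta = genpareto.fit(exceed, floc=0)
    # GPD tail quantile formula:
    # VaR_q = u + (beta / xi) * ((n / nu * (1 - q))**(-xi) - 1)
    return u + (beta / xi) * ((n / nu * (1 - q)) ** (-xi) - 1)
```

For heavy-tailed (e.g. Student's t) losses the fitted shape parameter xi is positive, and the GPD quantile extrapolates beyond the threshold far more reliably than a normal approximation would.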
Abstract:
Preliminary version
Abstract:
Railways admit an almost unlimited range of approaches; accordingly, the proposed theme, "The optimization of resources in the construction of railway lines", will focus in particular on the optimization of the resources (i) materials, (ii) labor and (iii) equipment assigned to the construction of the track and the catenary. This study aims to follow a logical and intuitive sequence that keeps a guiding thread throughout its development, which is why the sequence of objectives presented constitutes a path that opens successive windows of knowledge. Knowledge of the track and the catenary, an understanding of how the works interact with external factors, and experience in the use of planning and management tools are qualities that certainly lead to good results where the need to optimize resources in the construction of the track and the catenary is concerned. The transmission and reciprocity of information between the proposal-preparation and construction phases represent a resource that can lead to productivity gains. Coordination, carried out both internally and externally, is another determining factor in achieving the resource-optimization objectives. The optimization of resources in the construction of the track and the catenary is a permanent challenge for construction companies in the railway sector. It is on this premise that they invest in the training and specialization of their workforce and in the technological renewal of their equipment. The optimization of materials requires distinct approaches for the track and for the catenary; likewise, equipment and labor cannot be treated separately, since they do not operate autonomously, although their optimization follows different assumptions.
Abstract:
We investigate the crust, upper mantle and mantle transition zone of the Cape Verde hotspot by using seismic P and S receiver functions from several tens of local seismograph stations. We find a strong discontinuity at a depth of ~10 km underlain by a ~15-km-thick layer with a high (~1.9) Vp/Vs velocity ratio. We interpret this discontinuity and the underlying layer as the fossil Moho, inherited from the pre-hotspot era, and the plume-related magmatic underplate. Our uppermost-mantle models are very different from those previously obtained for this region: our S velocity is much lower and there are no indications of low densities. Contrary to previously published arguments for a standard transition zone thickness, our data indicate that this thickness under the Cape Verde islands is up to ~30 km less than in the ambient mantle. This reduction is a combined effect of a depression of the 410-km discontinuity and an uplift of the 660-km discontinuity. The uplift is in contrast to laboratory data and some seismic data indicating a negligible dependence of the depth of the 660-km discontinuity on temperature in hotspots. The large negative pressure-temperature slope suggested by our data implies that the 660-km discontinuity may resist passage of the plume. Our data reveal beneath the islands a reduction of S velocity of a few percent between 470-km and 510-km depths. This low-velocity layer in the upper transition zone under the Cape Verde archipelago is very similar to those previously found under the Azores and a few other hotspots. In the literature there are reports of a regional 520-km discontinuity whose impedance is too large to be explained by the known phase transitions. Our observations suggest that the 520-km discontinuity may represent the base of the low-velocity layer in the transition zone. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
Dissertation presented to the Escola Superior de Educação de Lisboa to obtain the degree of Master in Educational Sciences - Special Education specialty
Abstract:
IBD is a gastrointestinal disorder marked by chronic inflammation of the intestinal epithelium, damaging mucosal tissue and manifesting in several intestinal and extra-intestinal symptoms. Currently used medical therapy is able to induce and maintain remission, but does not modify or reverse the underlying pathogenic mechanism. Research into other medical approaches is crucial to the treatment of IBD and, for this, it is important to use animal models that mimic the characteristics of the disease in real life. The aim of this study was to develop an animal model of TNBS-induced colitis to test new pharmacological approaches. TNBS was instilled intracolonically as a single dose, as described by Morris et al.: 2.5% TNBS in 50% ethanol was administered through a catheter carefully inserted into the colon. Mice were kept in a Trendelenburg position to avoid reflux. On days 4 and 7, the animals were sacrificed by cervical dislocation. The induction was confirmed based on clinical symptoms/signs, ALP determination and histopathological analysis. At day 4, the TNBS group presented a decreased body weight and an alteration of intestinal motility characterized by diarrhea, severe edema of the anus and moderate morbidity, while in the two control groups no alterations in the clinical symptoms/signs were identified and body weight increased. The TNBS group presented the highest concentrations of ALP compared with the control groups. The histopathological analysis revealed severe necrosis of the mucosa with widespread necrosis of the intestinal glands. Severe hemorrhagic and purulent exudates were observed in the submucosa, muscularis and serosa. The TNBS group presented clinical symptoms/signs and histopathological features compatible with a correct induction of UC. The manifestations peaked at day 4 after induction. This study allows us to conclude that it is possible to develop a TNBS-induced colitis 4 days after instillation.
Abstract:
Final Internship Report presented to the Escola Superior de Dança to obtain the degree of Master in Dance Teaching.
Abstract:
Discrete data representations are necessary, or at least convenient, in many machine learning problems. While feature selection (FS) techniques aim at finding relevant subsets of features, the goal of feature discretization (FD) is to find concise (quantized) data representations adequate for the learning task at hand. In this paper, we propose two incremental methods for FD. The first method belongs to the filter family, in which the quality of the discretization is assessed by a (supervised or unsupervised) relevance criterion. The second method is a wrapper, where discretized features are assessed using a classifier. Both methods can be coupled with any static (unsupervised or supervised) discretization procedure and can be used to perform FS as a pre-processing or post-processing stage. The proposed methods attain efficient representations suitable for binary and multi-class problems with different types of data, and are competitive with existing methods. Moreover, using well-known FS methods with the features discretized by our techniques leads to better accuracy than with the features discretized by other methods or with the original features. (C) 2013 Elsevier B.V. All rights reserved.
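The incremental filter idea can be illustrated with a simplified sketch (invented function names; uniform bins as the static discretizer and mutual information with the class labels as the supervised relevance criterion — the paper's methods also admit other discretizers and unsupervised criteria):

```python
import numpy as np

def mutual_info(a, b):
    """Empirical mutual information (in nats) between two discrete sequences."""
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    joint = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(joint, (ai, bi), 1.0)          # joint contingency counts
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px * py)[nz])).sum())

def discretize_filter(x, y, max_bins=32, tol=1e-3):
    """Incrementally refine the discretization of one feature: double the
    number of uniform quantization bins while the relevance criterion
    (mutual information with the labels y) keeps improving."""
    x = np.asarray(x, float)
    best_bins, best_mi, bins = 2, -np.inf, 2
    while bins <= max_bins:
        edges = np.linspace(x.min(), x.max(), bins + 1)[1:-1]  # interior cut points
        mi = mutual_info(np.digitize(x, edges), y)
        if mi <= best_mi + tol:              # no real improvement: stop refining
            break
        best_bins, best_mi = bins, mi
        bins *= 2
    return best_bins, best_mi
```

Ranking features by the relevance value returned for their chosen discretization is one way such a filter can double as an FS pre-processing stage; the wrapper variant would replace `mutual_info` with the accuracy of a classifier trained on the discretized feature.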
Abstract:
This paper discusses the technology of smart floors as an enabler of smart cities. The discussion is based on technology embedded into the environment that enables location and navigation, but also wireless power transmission for powering up elements sitting on it, typically mobile devices. The smart floor implementation follows two paths. In the first, the floor is passive: passive RFIDs are normally embedded into the floor to provide intelligence to the surrounding space, usually complemented by a battery-powered mobile unit that scans the floor for the sensors and communicates the information to a database, which locates the mobile device in the environment. In the second path, the floor is active and delivers energy to the objects standing on top of it. In this paper these two approaches are presented and the technology behind them is discussed. © 2014 IEEE.
Abstract:
We consider the two-Higgs-doublet model as a framework in which to evaluate the viability of scenarios in which the sign of the coupling of the observed Higgs boson to down-type fermions (in particular, b-quark pairs) is opposite to that of the Standard Model (SM), while at the same time all other tree-level couplings are close to the SM values. We show that, whereas such a scenario is consistent with current LHC observations, both future running at the LHC and a future e⁺e⁻ linear collider could determine the sign of the Higgs coupling to b-quark pairs. Discrimination is possible for two reasons. First, the interference between the b-quark and the t-quark loop contributions to the ggh coupling changes sign. Second, the charged-Higgs loop contribution to the γγh coupling is large and fairly constant up to the largest charged-Higgs mass allowed by tree-level unitarity bounds when the b-quark Yukawa coupling has the opposite sign from that of the SM (the change in sign of the interference terms between the b-quark loop and the W and t loops having negligible impact).
Abstract:
Brown Syndrome (BS) is an anatomical and restrictive syndrome. According to Brown (1942), it is classified as an absence of elevation in adduction, a restrictive syndrome of the superior oblique. It can be congenital, acquired or iatrogenic, and is graded as mild, severe or profound. The actions of the inferior oblique (IO) are excycloduction, elevation and abduction. When this muscle is affected, the ocular movements show a limitation of elevation in adduction, reflecting a palsy of the muscle. Thus, both conditions present an absence of elevation in adduction, making a differential diagnosis between them important. Objectives: To identify the various motor and sensory characteristics of IO palsy, comparing them with the characteristics of BS and specifying the appropriate orthoptic tests. To present the differential diagnosis to be performed, explaining the main characteristics of coordimetric examinations and of the forced duction test.
Abstract:
We show that a light charged Higgs boson signal via τ±ν decay can be established at the Large Hadron Collider (LHC) also in the case of single top production. This process complements searches for the same signal in the case of charged Higgs bosons emerging from t t̄ production. The accessible models include the Minimal Supersymmetric Standard Model (MSSM) as well as a variety of 2-Higgs-Doublet Models (2HDMs). High energies and luminosities are however required, thereby restricting interest in this mode to the case of the LHC running at 14 TeV with the design configuration.