998 results for Functions, Theta.


Relevance:

20.00%

Abstract:

Necessary and sufficient conditions for choice functions to be rational have been intensively studied in the past. However, in these attempts a choice function is completely specified: given any subset of options, called an issue, the best option over that issue is always known, whereas in real-world scenarios it is often the case that only a few choices are known. In this paper we study partial choice functions and investigate necessary and sufficient rationality conditions for situations where only a few choices are known. We prove that our necessary and sufficient condition for partial choice functions reduces to the necessary and sufficient conditions for complete choice functions proposed in the literature. Choice functions have been instrumental in belief revision theory: in most approaches to belief revision, the problem studied can be described as the choice of possible worlds compatible with the input information, given an agent's prior belief state. The main effort has been to devise strategies for inferring the agent's revised belief state. Our study considers the converse problem: given a collection of input information items and their corresponding revision results (as provided by an agent), does there exist a rational revision operation used by the agent and a consistent belief state that may explain the observed results?
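The paper's own necessary and sufficient condition is not reproduced in this abstract. Purely as an illustration of what checking rationality over partially observed choices can look like, the sketch below tests a classical necessary condition (contraction consistency) on a partial choice function given as a mapping from observed issues to the chosen option; the data and names are hypothetical.

```python
# Illustrative only: the paper's actual conditions are not reproduced here.
# We check a classical necessary condition (contraction consistency) on a
# *partial* choice function, given as a mapping from observed issues
# (frozensets of options) to the single chosen option.

def violates_contraction(choices):
    """Return a violating pair (S, T) if one exists, else None.

    Contraction consistency: for observed issues S <= T with choices[T] in S,
    the choice over S must equal the choice over T.
    """
    issues = list(choices)
    for s in issues:
        for t in issues:
            if s < t and choices[t] in s and choices[s] != choices[t]:
                return s, t
    return None


if __name__ == "__main__":
    observed = {
        frozenset({"a", "b", "c"}): "a",
        frozenset({"a", "b"}): "b",   # conflicts with the choice over {a, b, c}
    }
    print(violates_contraction(observed))
```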

Relevance:

20.00%

Abstract:

Cellular signal transduction in response to environmental signals involves a relay of precisely regulated signal-amplifying and signal-damping events. A prototypical signalling relay involves ligands binding to cell surface receptors and triggering the activation of downstream enzymes, ultimately affecting the subcellular distribution and activity of DNA-binding proteins that regulate gene expression. These so-called signal transduction cascades have dominated our view of signalling for decades. More recently, evidence has accumulated that components of these cascades can be multifunctional, playing a conventional role, for example as a cell-surface receptor for a ligand, while also having alternative functions, for example as transcriptional regulators in the nucleus. This raises new challenges for researchers. What are the cues or triggers that determine which role such proteins play? What are the trafficking pathways that regulate the spatial distribution of such proteins so that they can perform nuclear functions, and under what circumstances are these alternative functions most relevant?

Relevance:

20.00%

Abstract:

A subset of proteins predominantly associated with early endosomes or implicated in clathrin-mediated endocytosis can shuttle between the cytoplasm and the nucleus. Although the endocytic functions of these proteins have been extensively studied, much less effort has been expended in exploring their nuclear roles. Membrane-trafficking proteins can affect signalling and proliferation, and this can occur at either a nuclear or an endocytic level. Furthermore, some of these proteins, such as Huntingtin interacting protein 1, are known cancer biomarkers. This review highlights the limits of our understanding of their nuclear functions and the relevance of this to signalling and oncogenesis.

Relevance:

20.00%

Abstract:

Understanding the seismic vulnerability of building structures is important for seismic engineers, building owners, risk insurers and governments. Seismic vulnerability defines a building's predisposition to be damaged by an earthquake of a given severity. There are two components to seismic risk: the seismic hazard and the exposure of the structural inventory to any given earthquake event. This paper demonstrates the development of fragility curves for different damage states using a detailed mechanical model of a moment-resisting reinforced concrete structure typical of Southern Europe. The mechanical model consists of a complex three-dimensional finite element model of the reinforced concrete moment-resisting frame and is used to define the damage states through pushover analysis. Fragility curves are also defined using the HAZUS and Risk-UE macroseismic methodologies. Comparison of the mechanically modelled and HAZUS fragility curves shows good agreement, while the Risk-UE methodology shows relatively poor agreement.
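The fragility parameters derived in the paper are not given in this abstract. As a minimal sketch, HAZUS-style fragility curves are commonly expressed as lognormal cumulative distribution functions of an intensity measure; the medians and dispersions below are hypothetical placeholders, not the paper's values.

```python
# Minimal sketch of the lognormal fragility form commonly used in
# HAZUS-style assessments; the medians/dispersions below are hypothetical,
# not the values derived in the paper.
import math

def fragility(im, median, beta):
    """P(damage state is reached or exceeded | intensity measure = im)."""
    if im <= 0:
        return 0.0
    z = math.log(im / median) / beta
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

if __name__ == "__main__":
    # Hypothetical spectral-displacement medians (cm) and dispersions
    # for four damage states (slight .. complete).
    states = {"slight": (1.5, 0.7), "moderate": (3.0, 0.7),
              "extensive": (7.0, 0.8), "complete": (15.0, 0.9)}
    for name, (median, beta) in states.items():
        print(name, round(fragility(5.0, median, beta), 3))
```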

Relevance:

20.00%

Abstract:

Most cellular functions, including gene expression, cell growth and proliferation, metabolism, morphology, motility, intercellular communication and apoptosis, are regulated by protein-protein interactions (PPIs). The cell responds to a variety of stimuli; as such, protein expression is a dynamic process and the complexes formed are assembled transiently, changing according to their functional cycle. In addition, many proteins are expressed in a cell-type-dependent manner. At any given moment a cell may contain on the order of hundreds of thousands of binary PPIs, and finding a protein's interaction partners is a means of inferring its function. Alterations in PPI networks can also provide information about disease mechanisms. The most frequently used binary identification method is the Yeast Two-Hybrid system, adapted for large-scale screening. This methodology was used here to identify the isoform-specific interactomes of Protein Phosphatase 1 (PP1) in human brain. PP1 is a Ser/Thr protein phosphatase involved in a wide variety of cellular pathways and events. It is a conserved protein encoded by three genes, which give rise to the α, β and γ isoforms, with the latter producing γ1 and γ2 by alternative splicing. The different PP1 isoforms are regulated by their interaction partners, the PP1-interacting proteins (PIPs). The modular nature of PP1 complexes, as well as their combinatorial association, generates a large repertoire of regulatory complexes and roles in cellular signalling circuits. The isoform-specific PP1 interactomes in brain are described here, with a total of 263 interactions identified and integrated with data collected from several PPI databases. Additionally, two PIPs were selected for a more in-depth characterization of the interaction: Taperin and Synphilin-1A. Taperin is a still poorly described protein, recently found to be a PIP. Its interaction with the different PP1 isoforms and its cellular localization were analysed. Taperin was found to be cleaved, to be present in the cytoplasm, membrane and nucleus, and to increase PP1 levels in HeLa cells. At the membrane it co-localizes with PP1 and actin, and a Taperin form mutated in the PP1-binding motif is enriched in the nucleus, together with actin. Furthermore, Taperin was found to be expressed in testis and to localize to the acrosomal region of the sperm head, a structure where PP1 and actin are also present. Synphilin-1A, an isoform of Synphilin-1, is an aggregation-prone and toxic protein involved in Parkinson's disease. Synphilin-1A was shown to bind the PP1 isoforms by co-transformation in yeast, and mutation of its PP1-binding motif significantly decreased the interaction in an overlay assay. When overexpressed in Cos-7 cells, Synphilin-1A formed inclusion bodies in which PP1 was present; however, the mutated form of Synphilin-1A was also able to aggregate, indicating that inclusion formation was not dependent on PP1 binding. This work provides a new perspective on PP1 interactomes, including the identification of dozens of isoform-specific binding partners, and emphasizes the importance of PIPs, not only for understanding the cellular functions of PP1 but also as targets for therapeutic intervention.

Relevance:

20.00%

Abstract:

The main motivation for the work presented here began with previously conducted experiments with a programming concept at the time named "Macro". These experiments led to the conviction that it would be possible to build an engine control system from scratch that could eliminate many of the current problems of engine management systems in a direct and intrinsic way, while also minimizing the range of software and hardware needed for a final, fully functional system. Initially, this work makes a comprehensive survey of the state of the art in the software, and corresponding hardware, of automotive tools and automotive ECUs. Problems arising from such software are identified, and it becomes clear that practically all of them stem, directly or indirectly, from the continued use of extremely long and complex "tool chains". Similarly, on the hardware side, it is argued that the problems stem from the extreme complexity and interdependency inside processor architectures. The conclusions are presented through an extensive list of "pitfalls", which are thoroughly enumerated, identified and characterized. Solutions are also proposed for the various current issues and for their implementation. This final work is embodied in a "proof-of-concept" system called "ECU2010". The central element of this system is the aforementioned "Macro" concept: a graphical block representing one of the many operations required in an automotive system, with arithmetic, logic, filtering, integration and multiplexing functions, among others. The end result of the proposed work is a single, fully integrated tool enabling the development and management of the entire system in one simple visual interface. Part of the presented result relies on a hardware platform fully adapted to the software, enabling high flexibility and scalability while using exactly the same technology for the ECU, data logger and peripherals alike. Current systems follow a mostly evolutionary path, allowing only online calibration of parameters but never the online alteration of the automotive functionality algorithms themselves. By contrast, the system developed and described in this thesis had the advantage of following a "clean-slate" approach, whereby everything could be rethought globally. In the end, of all the system characteristics, "LIVE-Prototyping" is the most relevant feature, allowing the adjustment of automotive algorithms (e.g. injection, ignition, lambda control) fully online, keeping the engine constantly running, without ever having to stop or reboot to make such changes. This consequently eliminates the "turnaround delay" typically present in current automotive systems, thereby enhancing the efficiency and handling of such systems.
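The thesis' actual Macro format and execution model are not described in this abstract. Purely as an illustration of the block-based idea, the sketch below composes two hypothetical blocks (a sensor source and a first-order low-pass filter) in a tiny dataflow evaluator; all names and values are invented for the example.

```python
# Highly simplified, hypothetical sketch of a block-based ("Macro"-like)
# dataflow element; the thesis' actual Macro representation, naming and
# execution model are not reproduced here.
class Block:
    def __init__(self, func, *inputs):
        self.func = func          # the operation this block performs
        self.inputs = inputs      # upstream blocks or constants
        self.value = 0.0

    def evaluate(self):
        args = [i.value if isinstance(i, Block) else i for i in self.inputs]
        self.value = self.func(*args)
        return self.value


def lowpass(alpha):
    state = {"y": 0.0}
    def step(x):
        state["y"] += alpha * (x - state["y"])  # first-order IIR filter
        return state["y"]
    return step


if __name__ == "__main__":
    rpm_sensor = Block(lambda: 3000.0)            # source block
    filtered = Block(lowpass(0.2), rpm_sensor)    # filtering block
    for _ in range(3):
        rpm_sensor.evaluate()
        print(round(filtered.evaluate(), 1))
```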

Relevance:

20.00%

Abstract:

Dependence clusters are (maximal) collections of mutually dependent source code entities according to some dependence relation. Their presence in software complicates many maintenance activities, including testing, refactoring, and feature extraction. Despite several studies finding them common in production code, their formation, identification, and overall structure are not well understood, partly because of challenges in approximating true dependences between program entities. Previous research has considered two approximate dependence relations: a fine-grained statement-level relation using control and data dependences from a program's System Dependence Graph, and a coarser relation based on function-level control-flow reachability. In principle, the first is more expensive and more precise than the second. Using a collection of twenty programs, we present an empirical investigation of the clusters identified by these two approaches. In support of the analysis, we consider a hybrid cluster type that works at the coarser function level but is based on the higher-precision statement-level dependences. The three types of clusters are compared based on their slice sets using two clustering metrics. We also perform extensive analysis of the programs to identify linchpin functions – functions primarily responsible for holding a cluster together. Results include evidence that the less expensive, coarser approaches can often be used as effective proxies for the more expensive, finer-grained approaches. Finally, the linchpin analysis shows that linchpin functions can be effectively and automatically identified.
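The paper's tooling and precise cluster definitions are not reproduced in the abstract. As a rough sketch of the coarser, function-level approximation, the code below computes forward reachability over a hypothetical call graph and groups functions whose reachable sets coincide; it illustrates the same-slice clustering idea only, not the paper's analysis.

```python
# Sketch of the coarse, function-level approximation: build forward
# reachability over a (hypothetical) call graph and group functions whose
# reachable sets coincide.
from collections import defaultdict

def reachable(graph, start):
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return frozenset(seen)

def clusters(graph):
    groups = defaultdict(list)
    for f in graph:
        groups[reachable(graph, f)].append(f)
    # Keep only non-trivial groups (more than one function).
    return [sorted(g) for g in groups.values() if len(g) > 1]

if __name__ == "__main__":
    call_graph = {            # hypothetical call graph: caller -> callees
        "main": ["parse", "run"],
        "parse": ["run"],     # parse and run call each other,
        "run": ["parse"],     # so they share a reachable set
        "log": [],
    }
    print(clusters(call_graph))   # [['parse', 'run']]
```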

Relevance:

20.00%

Abstract:

In previous papers by the authors, fuzzy model identification methods were discussed. A bacterial algorithm for extracting a fuzzy rule base from a training set was presented, and the Levenberg-Marquardt algorithm was proposed for determining membership functions in fuzzy systems. In this paper the Levenberg-Marquardt technique is improved to optimise the membership functions in the fuzzy rules without a Ruspini partition. The class of membership functions investigated is the trapezoidal one, as it is general enough and widely used. The method can easily be extended to arbitrary piecewise-linear functions as well.
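The improved Levenberg-Marquardt update itself is not given in the abstract. Below is a minimal sketch of the trapezoidal membership function being optimized, with a generic least-squares fit standing in for the paper's procedure; the sample data and initial parameters are hypothetical.

```python
# Minimal sketch of a trapezoidal membership function and a generic
# Levenberg-Marquardt-style fit; the paper's improved update rule is not
# reproduced here, and the sample data / initial parameters are hypothetical.
import numpy as np
from scipy.optimize import least_squares

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - a) / max(b - a, 1e-12), 0.0, 1.0)
    falling = np.clip((d - x) / max(d - c, 1e-12), 0.0, 1.0)
    return np.minimum(rising, falling)

if __name__ == "__main__":
    xs = np.linspace(0.0, 10.0, 50)
    target = trapezoid(xs, 2.0, 4.0, 6.0, 8.0)          # "measured" degrees
    residuals = lambda p: trapezoid(xs, *p) - target
    fit = least_squares(residuals, x0=[1.0, 3.0, 7.0, 9.0], method="lm")
    print(np.round(fit.x, 2))
```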

Relevance:

20.00%

Abstract:

We study the Riemann boundary value problem for analytic functions, in the class of analytic functions represented by Cauchy-type integrals with density in Lebesgue spaces with variable exponent. We consider both the case when the coefficient is piecewise continuous and the case when it is of a more general nature, admitting oscillation. Explicit formulas for the solutions in the variable exponent setting are given. The related singular integral equations in the same setting are also investigated. As an application, an extension of the Szegö-Helson theorem to the case of variable exponents is derived.
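The boundary condition itself is omitted from this abstract. In its classical form (stated here as an assumption, not quoted from the paper), the Riemann problem asks for a function \(\varphi\), analytic off a curve \(\Gamma\), whose boundary values satisfy

\[
\varphi^{+}(t) = G(t)\,\varphi^{-}(t) + g(t), \qquad t \in \Gamma,
\]

where \(G\) is the coefficient referred to above and \(g\) the free term, with solutions sought among Cauchy-type integrals whose density lies in a variable exponent Lebesgue space \(L^{p(\cdot)}(\Gamma)\).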

Relevance:

20.00%

Abstract:

Doctoral thesis, Mathematics (Algebra, Logic and Foundations), Universidade de Lisboa, Faculdade de Ciências, 2014

Relevance:

20.00%

Abstract:

Doctoral thesis, Biomedical Sciences (Functional Sciences), Universidade de Lisboa, Faculdade de Medicina, 2014

Relevance:

20.00%

Abstract:

What is the best luminance contrast weighting function for image quality optimization? Traditionally measured contrast sensitivity functions (CSFs) have often been used as weighting functions in image quality and difference metrics, and such weightings have been shown to increase the sharpness and perceived quality of test images. We suggest that contextual CSFs (cCSFs) and contextual discrimination functions (cVPFs) should provide a basis for further improvement, since these are measured directly from pictorial scenes, modelling threshold and suprathreshold sensitivities within the context of complex masking information. Image quality assessment is understood to require the detection and discrimination of masked signals, making contextual sensitivity and discrimination functions directly relevant. In this investigation, test images are weighted with a traditional CSF, a cCSF, a cVPF and a constant function. Controlled mutations of these functions are also applied as weighting functions, seeking the optimal spatial-frequency band weighting for quality optimization. Image quality, sharpness and naturalness are then assessed in two-alternative forced-choice psychophysical tests. We show that maximal quality for our test images results from cCSFs and cVPFs mutated to boost contrast in the higher visible frequencies.
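The measured cCSF and cVPF curves are not given in this abstract. As a minimal sketch of the underlying operation (weighting an image's spatial-frequency bands by a sensitivity-like function), the code below applies a generic band-pass weighting in the Fourier domain; the weighting shape, viewing-geometry constant and test image are all hypothetical.

```python
# Minimal sketch of applying a CSF-like weighting in the frequency domain.
# The weighting function below is a generic band-pass shape, not the measured
# cCSF/cVPF curves from the paper, and the image is synthetic.
import numpy as np

def csf_like(f, peak=4.0):
    """Generic band-pass weighting over spatial frequency f (cycles/degree)."""
    return f * np.exp(-f / peak)

def weight_image(image, cycles_per_degree=30.0):
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]          # normalised vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]          # normalised horizontal frequencies
    radial = np.hypot(fx, fy) * cycles_per_degree
    weights = csf_like(radial)
    weights /= weights.max()                 # keep weights in [0, 1]
    spectrum = np.fft.fft2(image) * weights
    return np.real(np.fft.ifft2(spectrum))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    out = weight_image(img)
    print(out.shape, round(float(out.std()), 4))
```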