999 results for "simplified framework"


Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a filter-based algorithm for feature selection. The filter is based on partitioning the set of features into clusters. The number of clusters, and consequently the cardinality of the subset of selected features, is automatically estimated from the data. The computational complexity of the proposed algorithm is also investigated. A variant of this filter that considers feature-class correlations is also proposed for classification problems. Empirical results involving ten datasets illustrate the performance of the developed algorithm, which in general obtains competitive classification accuracy compared to state-of-the-art algorithms that find clusters of features. We show that, if computational efficiency is an important issue, the proposed filter may be preferred over its counterparts, making it eligible to join a pool of feature selection algorithms to be used in practice. As an additional contribution of this work, a theoretical framework is used to formally analyze some properties of feature selection methods that rely on finding clusters of features. (C) 2011 Elsevier Inc. All rights reserved.
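
A minimal sketch of this family of filters (generic assumptions, not the paper's exact algorithm: the paper estimates the number of clusters automatically from the data, whereas the sketch fixes it with a dendrogram cut at `cut`): features are clustered by correlation distance, and one representative is kept per cluster, with the supervised variant preferring the feature most correlated with the class.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def select_features(X, y=None, cut=0.5):
    """X: (n_samples, n_features); y: optional class labels.
    Returns indices of one representative feature per cluster."""
    corr = np.abs(np.corrcoef(X, rowvar=False))        # |feature-feature correlation|
    dist = 1.0 - corr                                  # correlation distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, t=cut, criterion="distance")  # dendrogram cut -> clusters
    selected = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        if y is not None:
            # Supervised variant: keep the member most correlated with the class.
            score = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in members]
        else:
            # Unsupervised: keep the member most correlated with its cluster.
            score = corr[np.ix_(members, members)].mean(axis=1)
        selected.append(int(members[np.argmax(score)]))
    return sorted(selected)
```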

Relevance:

30.00%

Publisher:

Abstract:

The results of searches for supersymmetry by the CMS experiment are interpreted in the framework of simplified models. The results are based on data corresponding to an integrated luminosity of 4.73 to 4.98 fb⁻¹, collected at the LHC in proton-proton collisions at a center-of-mass energy of 7 TeV. This paper describes the method of interpretation and provides upper limits on the product of the production cross section and branching fraction as a function of new particle masses for a number of simplified models. These limits and the corresponding experimental acceptance calculations can be used to constrain other theoretical models and to compare different supersymmetry-inspired analyses. © 2013 CERN.
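
Such limit maps are typically consumed as follows (a toy sketch; the grids, values, and functional shapes below are made up, not CMS data): a model point is excluded when its predicted cross section times branching fraction exceeds the published 95% CL upper limit at the corresponding masses.

```python
import numpy as np

# Hypothetical mass grids (GeV) and a made-up 95% CL upper-limit map (pb);
# real analyses publish these as numerical tables or histograms.
m_mother = np.linspace(300, 1200, 10)          # produced-particle mass
m_lsp = np.linspace(0, 600, 7)                 # lightest-neutralino mass
sigma_ul = 0.5 * np.exp(-np.subtract.outer(m_mother, m_lsp) / 400.0)

# A hypothetical model's predicted sigma x BR, falling with the mother mass.
sigma_pred = 2.0 * np.exp(-m_mother / 250.0)[:, None]   # shape (10, 1)

excluded = sigma_pred > sigma_ul               # boolean exclusion map, (10, 7)
```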

Relevance:

30.00%

Publisher:

Abstract:

Many studies have been developed to analyze structural seismic behavior through the damage index concept. This index has been used to quantify the safety of new and existing structures and to establish a framework for seismic retrofitting decisions. Most proposed models are based on a post-earthquake evaluation, in such a way that they uncouple the structural response from the damage evaluation. In this paper, a generalization of the model by Flórez-López (1995) is proposed. The formulation employs irreversible thermodynamics and internal state variable theory applied to the study of beams and frames, and it allows an explicit coupling between the degradation and the structural mechanical behavior. A damage index is defined in order to model elastoplasticity coupled with damage and fatigue damage.
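
For intuition, a schematic sketch (not the paper's thermodynamic formulation; the coefficients and the linear evolution law are illustrative) of how such coupled models degrade the mechanical response through a scalar damage variable driven by plastic deformation and load reversals:

```python
def update_damage(d, delta_p, n_reversals, c1=0.05, c2=1e-4):
    """delta_p: increment of accumulated plastic rotation;
    n_reversals: number of new load reversals (fatigue contribution)."""
    d = d + c1 * delta_p + c2 * n_reversals   # illustrative linear evolution law
    return min(d, 1.0)                        # d = 1 means complete degradation

def effective_stiffness(k0, d):
    return (1.0 - d) * k0                     # damage-degraded member stiffness
```

Because d enters the stiffness directly, the structural response and the damage evaluation evolve together rather than being uncoupled as in post-earthquake assessments.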

Relevance:

30.00%

Publisher:

Abstract:

Fault diagnosis has become an important component in intelligent systems, such as intelligent control systems and intelligent eLearning systems. Reiter's diagnosis theory, described by first-order sentences, has attracted much attention in this field. However, descriptions and observations of most real-world situations involve fuzziness because of the incompleteness and uncertainty of knowledge, e.g., the fault diagnosis of student behaviors in eLearning processes. In this paper, an extension of Reiter's consistency-based diagnosis methodology, Fuzzy Diagnosis, is proposed, which is able to deal with incomplete or fuzzy knowledge. A number of important properties of the fuzzy diagnosis scheme are also established. Computing fuzzy diagnoses is mapped to solving a system of inequalities. Some special cases, abstracted from real-world situations, are discussed. In particular, the fuzzy diagnosis problem in which fuzzy observations are represented by clause-style fuzzy theories is presented, together with a method for solving it. A student fault-diagnosis problem abstracted from a simplified real-world eLearning case is described to demonstrate the application of our diagnostic framework.
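
A toy illustration of the inequality view (the components, observations, and particular fuzzy semantics are hypothetical, not the paper's exact scheme): each component receives an unknown fault degree in [0, 1], each fuzzy observation induces an inequality over those degrees, and any satisfying assignment is a diagnosis, with minimal ones preferred.

```python
import itertools

# Hypothetical inequalities for two components:
#   observation o1 (degree 0.8) is explained only if max(x[0], x[1]) >= 0.8
#   observation o2 (degree 0.3) requires x[1] <= 0.3
constraints = [
    lambda x: max(x[0], x[1]) >= 0.8,
    lambda x: x[1] <= 0.3,
]

grid = [i / 10 for i in range(11)]          # coarse scan over [0, 1]^2
diagnoses = [x for x in itertools.product(grid, repeat=2)
             if all(c(x) for c in constraints)]
# Prefer minimal diagnoses (smallest total fault degree), as in the
# classical consistency-based setting.
best = min(diagnoses, key=sum)              # e.g. (0.8, 0.0)
```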

Relevance:

30.00%

Publisher:

Abstract:

This study (1) introduces an analytical framework for examining how the community embeds and shapes competitiveness and (2) analyses the concept of competitiveness with respect to its institutional and normative content. The goal is to elaborate the concept of the competitiveness of a community and an approach capable of analysing it. The task is twofold: 1. interpreting competitiveness from the perspective of the social settings of decisions; 2. assessing the impact of those social settings on competitiveness.

The first chapter outlines a framework for analysing the factors that determine individual decisions and the impact of those decisions on the environment of future interactions. In our approach, an individual decision is determined by four factors: the social environment, the natural environment, personal characteristics, and the partners in interactions; the resulting decisions in turn shape the environment of future decisions. Assessing this impact requires an orienting concept, which in our analysis is value creation, defined as follows: in value-creating activities, someone pursues their personal goals by broadening the opportunities for, and increasing the returns of, mutually beneficial cooperation with others.

The second chapter analyses in detail the social settings of individual decisions and the impact of individual decisions on the community. The world of formal and informal institutions, the system of norms and conventions regulating community behavior, is broken down into five basic elements that often mix in practice: value systems, conventions, community rules, official rules, and contracts between individuals. Of these, the institutional elements of private cooperation that the affected actors can modify themselves (conventions and contracts) adapt fastest; the formal institutions governing the community as a whole are more sluggish because of their bias toward the status quo; and the norms that informally influence community life are the most stable. Communities usually change slowly, mostly as a consequence of unintended effects. Nevertheless, a serious role in the deliberate shaping of community institutions is played by (1) institutional innovators who renew conventions, (2) entrepreneurs who modify contractual formulas, and (3) political actors who take part in shaping official rules as political entrepreneurs, officials, or participants in public life.

The third chapter analyses the concept of competitiveness and, on this basis, defines the competitiveness of a community. We examine what assumptions the concept makes about the social environment and what normative elements the definition contains. For this analysis a stripped-down version of the concept is used: competitiveness is someone's ability to participate in the economic division of labor in a value-creating way such that the relative return of their activity does not decline. The analysis shows that competitiveness rests on the following seven elements of the social environment: 1. a defined membership of the community: who the members are and what membership means; 2. a common past, future, and system of conventions and norms among the potential cooperating parties; 3. functioning institutions of economic cooperation (exchange, entrepreneurship, property, contract); 4. a normative concept of value creation and a detailed system of rules built on it that enjoys community legitimacy in its details; 5. a value system and community rules that support innovation and are sufficiently flexible; 6. conventions, official rules, and contracts that fit the needs of the economic division of labor in detail and follow its changes; 7. the motivation and opportunity of the actors engaged in deliberately shaping the social environment (community innovators, entrepreneurs, and political actors) to maintain an institutional environment that supports keeping up the relative level of returns. The institutional analysis shows that the concept of competitiveness carries rich value content and a definite image of the community's institutional system; the competitiveness of a community can accordingly be defined as the social environment encoded in the concept of competitiveness. The next steps of the research are to measure community competitiveness, to explore the mechanisms influencing it, and to develop analytical tools and practical guides supporting its improvement. To prepare these tasks, the appendix presents two historical case studies, briefly reviews the main findings of the relevant literature, and introduces a normative analytical tool intended for practical use, with which it can be assessed how far state measures encourage value-creating entrepreneurship.

Relevance:

30.00%

Publisher:

Abstract:

Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurement of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometric objects, such as high-resolution digital terrain models (DTMs), buildings, and trees. In the past decade, LIDAR has attracted increasing interest from researchers in remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, the voluminous data pose a new challenge for automated extraction of geometric information, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points.

In this dissertation, a framework is proposed to automatically extract information about different kinds of geometric objects, such as terrain and buildings, from LIDAR measurements. These products are essential to numerous applications such as flood modeling, landslide prediction, and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements: by gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from the non-ground measurements using a region-growing algorithm based on a plane-fitting technique. Raw footprints for the segmented building measurements are derived by connecting boundary points, and are further simplified and adjusted by several proposed operations to remove the noise caused by irregularly spaced LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then adjusted. Since the adjusting operations designed for simple building models do not work well on general 2D topology, a 2D snake algorithm is proposed for topology adjustment; it consists of newly defined energy functions for topology adjusting and a linear algorithm for finding the minimal energy value of 2D snake problems. Data sets from urbanized areas including large institutional, commercial, and small residential buildings were employed to test the proposed framework. The results demonstrate that the proposed framework achieves very good performance.
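
A condensed sketch of a progressive morphological ground filter of this kind (assuming the points have already been gridded into a minimum-elevation raster; the window schedule and threshold rule are illustrative simplifications):

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_filter(z, cell=1.0, max_window=20,
                                     slope=0.3, dh0=0.3, dh_max=2.5):
    """z: minimum-elevation raster (meters); returns a boolean ground mask."""
    ground = np.ones(z.shape, dtype=bool)
    surface = z.copy()
    k = 1
    while (window := 2 * k + 1) <= max_window:
        opened = grey_opening(surface, size=(window, window))
        # The elevation-difference threshold grows with the window size so
        # that sloped terrain is not mistaken for objects.
        dh = dh0 if window == 3 else min(dh0 + slope * (window - 3) * cell, dh_max)
        nonground = surface - opened > dh     # vehicles, vegetation, buildings
        ground &= ~nonground
        surface = opened                      # filter the opened surface next
        k *= 2                                # window sizes increase progressively
    return ground
```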

Relevance:

30.00%

Publisher:

Abstract:

During the last decade, wind power generation has seen rapid development. According to the U.S. Department of Energy, achieving 20% wind power penetration in the U.S. by 2030 will require: (i) enhancement of the transmission infrastructure, (ii) improvement of the reliability and operability of wind systems, and (iii) increased U.S. manufacturing capacity for wind generation equipment. This research concentrates on improving the reliability and operability of wind energy conversion systems (WECSs). The increased penetration of wind energy into the grid imposes new operating conditions on power systems, and this change requires the development of an adequate reliability framework. This thesis proposes a framework for assessing WECS reliability in the face of external disturbances (e.g., grid faults) and internal component faults. The framework is illustrated using a detailed model of a Type C WECS (a doubly fed induction generator) with the corresponding deterministic and random variables in a simplified grid model. Fault parameters and performance requirements essential to reliability measurements are included in the simulation. The proposed framework allows a quantitative analysis of WECS designs; analysis of WECS control schemes, e.g., fault ride-through mechanisms; discovery of key parameters that influence overall WECS reliability; and computation of WECS reliability with respect to different grid codes/performance requirements.
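
In spirit, the reliability computation can be sketched as Monte Carlo estimation (the fault model, ride-through criterion, and numbers below are placeholders, not the thesis's detailed DFIG simulation): sample random grid-fault parameters, check each sampled fault against a grid-code-style performance requirement, and report the fraction of successes.

```python
import random

def sample_fault():
    return {"depth": random.uniform(0.0, 0.9),      # residual voltage, p.u.
            "duration": random.uniform(0.1, 1.5)}   # seconds

def rides_through(fault):
    # Placeholder for the detailed Type C (DFIG) simulation; a grid-code
    # LVRT curve typically allows deep sags only if they are short.
    return fault["depth"] > 0.15 or fault["duration"] < 0.15

runs = 10_000
reliability = sum(rides_through(sample_fault()) for _ in range(runs)) / runs
```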

Relevance:

30.00%

Publisher:

Abstract:

Common computational principles underlie the processing of various visual features in the cortex, and they are thought to create similar patterns of contextual modulation in behavioral studies of different features, such as orientation and direction of motion. Here, I studied the possibility that a single theoretical framework of circular feature coding and processing, implemented in different visual areas, could explain these similarities in observations. Stimuli were created that allowed direct comparison of the contextual effects on orientation and motion direction with two different psychophysical probes: changes in weak and in strong signal perception. A single simplified theoretical model of circular feature coding, including only inhibitory interactions and decoding through a standard vector average, successfully predicted the similarities in the two domains, while differences in feature population characteristics explained the differences in modulation on both experimental probes. These results demonstrate how a single computational principle can underlie the processing of various features across the cortices.
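
A minimal sketch of these ingredients (the population size, tuning width, and inhibition strength are arbitrary choices, not the study's fitted values): a ring of units tuned to a circular feature, a purely inhibitory (divisive) interaction, and decoding through the standard vector average.

```python
import numpy as np

n = 64
prefs = np.linspace(0.0, 2 * np.pi, n, endpoint=False)  # preferred directions

def population_response(theta, kappa=2.0, inhibition=0.5):
    r = np.exp(kappa * (np.cos(prefs - theta) - 1.0))    # circular tuning curves
    return r / (1.0 + inhibition * r.mean())             # divisive (inhibitory) gain

def vector_average(r):
    return np.angle(np.sum(r * np.exp(1j * prefs)))      # decoded feature angle

theta_hat = vector_average(population_response(np.pi / 3))  # ~ pi/3
```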

Relevance:

30.00%

Publisher:

Abstract:

Single-cell functional proteomics assays can connect genomic information to biological function through quantitative and multiplex protein measurements. Tools for single-cell proteomics have developed rapidly over the past 5 years and are providing unique opportunities. This thesis describes an emerging microfluidics-based toolkit for single-cell functional proteomics, focusing on the development of single-cell barcode chips (SCBCs) with applications in fundamental and translational cancer research.

The thesis begins with a microchip designed to simultaneously quantify a panel of secreted, cytoplasmic, and membrane proteins from single cells; this chip is the prototype for subsequent proteomic microchips with more sophisticated designs for preclinical cancer research or clinical applications. The SCBCs are a highly versatile and information-rich tool for single-cell functional proteomics. They are based upon isolating individual cells, or defined numbers of cells, within microchambers, each of which is equipped with a large antibody microarray (the barcode), with between a few hundred and ten thousand microchambers included within a single microchip. Functional proteomics assays at single-cell resolution yield unique pieces of information that significantly shape thinking in cancer research. An in-depth discussion of the analysis and interpretation of this unique information, such as functional protein fluctuations and protein-protein correlative interactions, follows.

The SCBC is a powerful tool for resolving the functional heterogeneity of cancer cells. It has the capacity to extract a comprehensive picture of the signal transduction network from single tumor cells and thus provides insight into the effect of targeted therapies on protein signaling networks. We demonstrate this point by applying the SCBCs to investigate three isogenic cell lines of glioblastoma multiforme (GBM).

The cancer cell population is highly heterogeneous, with high-amplitude fluctuations at the single-cell level, which in turn grant robustness to the entire population. The concept of a stable population existing in the presence of random fluctuations is reminiscent of many physical systems that are successfully understood using statistical physics, so tools derived from that field can be applied, using fluctuations to determine the nature of signaling networks. In the second part of the thesis, we focus on such a case, using thermodynamics-motivated principles to understand cancer cell hypoxia: single-cell proteomics assays coupled with a quantitative version of Le Chatelier's principle, derived from statistical mechanics, yield detailed and surprising predictions, which were found to be correct in both cell-line and primary tumor models.
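
The linear-response form of such a prediction can be sketched as follows (a hedged sketch on simulated data; the panel size, distributions, and perturbation vector are hypothetical): for a small perturbation conjugate to the measured proteins, the predicted shift in mean levels is the unperturbed covariance matrix applied to the perturbation vector.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.lognormal(mean=1.0, sigma=0.4, size=(500, 6))  # cells x proteins (simulated)
cov = np.cov(X, rowvar=False)            # unperturbed protein-protein covariance
lam = np.array([0.2, 0.0, 0.0, -0.1, 0.0, 0.0])  # hypothetical perturbation strengths
predicted_shift = cov @ lam              # first-order predicted change in means
```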

The third part of the thesis demonstrates the application of this technology in preclinical cancer research to study GBM cancer cell resistance to molecular targeted therapy. Physical approaches to anticipating therapy resistance and identifying effective therapy combinations are discussed in detail. Our approach is based upon elucidating the signaling coordination within the phosphoprotein signaling pathways that are hyperactivated in human GBMs, and interrogating how that coordination responds to perturbation by targeted inhibitors. Strongly coupled protein-protein interactions constitute most signaling cascades; a physical analogy of such a system is the strongly coupled atom-atom interactions in a crystal lattice. Just as the atomic interactions can be decomposed into a series of independent normal vibrational modes, a simplified picture of signaling network coordination can be achieved by diagonalizing protein-protein correlation or covariance matrices to decompose the pairwise correlative interactions into a set of distinct linear combinations of signaling proteins (i.e., independent signaling modes). By doing so, two independent signaling modes (one associated with mTOR signaling and a second associated with ERK/Src signaling) were resolved, which in turn allowed us to anticipate resistance and to design effective combination therapies, as well as to identify those therapies and therapy combinations that would be ineffective. We validated our predictions in mouse tumor models, and all predictions were borne out.
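
The diagonalization step itself is ordinary eigendecomposition of the covariance matrix, with each eigenvector read as one signaling mode; a minimal sketch on simulated data (the protein panel and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
proteins = ["p-mTOR", "p-ERK", "p-Src", "p-S6K", "p-EGFR", "p-NDRG1"]  # hypothetical panel
X = rng.lognormal(sigma=0.5, size=(300, len(proteins)))  # cells x phosphoproteins

cov = np.cov(X, rowvar=False)                 # protein-protein covariance
eigvals, eigvecs = np.linalg.eigh(cov)        # symmetric matrix -> eigh
order = np.argsort(eigvals)[::-1]             # sort modes by variance carried
for val, vec in zip(eigvals[order][:2], eigvecs[:, order][:, :2].T):
    top = sorted(zip(proteins, vec), key=lambda t: -abs(t[1]))[:3]
    print(f"mode variance {val:.2f}:", top)   # dominant proteins in each mode
```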

In the last part, some preliminary results on the clinical translation of single-cell proteomics chips are presented. The successful demonstration of our work on human-derived xenografts provides the rationale for extending our current work into the clinic. It will enable us to interrogate GBM tumor samples in a way that could potentially yield a straightforward, rapid interpretation, so that we can give therapeutic guidance to the attending physicians within a clinically relevant time scale. The technical challenges of clinical translation are presented, and our solutions to address them are discussed as well. A clinical case study then follows, in which preliminary data collected from a pediatric GBM patient bearing an EGFR-amplified tumor are presented to demonstrate the general protocol and workflow of the proposed clinical studies.