986 results for Notion of code


Relevance: 90.00%

Abstract:

With the ever-growing popularity of smartphones and tablets, Android has become the leading technology in the smartphone arena, with more than one billion active users to date. Android also runs on Android TV, smartwatches and cars. In recent years, Android applications have therefore become one of the major development sectors in the software industry. As of mid 2013, the number of published applications on Google Play had exceeded one million, and the cumulative number of downloads was more than 50 billion. A 2013 survey also revealed that 71% of mobile application developers work on Android applications. Given this scale, it is evident that people rely on these applications daily for everything from simple tasks, like keeping track of the weather, to complex tasks, like managing one's bank accounts. Hence, like every other kind of code, Android code needs to be verified in order to work properly and achieve a certain level of confidence. Because of the sheer number of applications, it is very hard to test Android applications manually, especially when an application has to be verified across various versions of the OS and various device configurations, such as different screen sizes and hardware availability. Consequently, there has recently been a great deal of work in the computer science community on testing methods for Android applications. Android attracts researchers because of its open-source nature: research is more streamlined when the code for both the applications and the platform is readily available to analyze. Much of this research has focused on input test generation, and several testing tools are now available that automatically generate test cases for Android applications. These tools differ from one another in the strategies and heuristics they use to generate test cases, but there is still very little work comparing these tools and their strategies. Recently, research has been carried out that compared the performance of various available tools with respect to code coverage, fault detection, ability to work on multiple platforms, and ease of use, by running the tools on 60 real-world Android applications. The results showed that, although effective, the strategies used by these tools face limitations and have room for improvement. The purpose of this thesis is to extend this research in a more specific, attribute-oriented direction. Attributes refer to tasks that can be completed using the Android platform, ranging from a basic system call for receiving an SMS to more complex tasks such as sending the user from the current application to another. The idea is to develop a benchmark for Android testing tools based on performance with respect to these attributes, allowing the tools to be compared attribute by attribute. For example, if an application plays an audio file, will the testing tool be able to generate a test input that triggers the playback of that file?
By using multiple applications that exercise different attributes, it becomes visible which testing tool is more useful for which kinds of attributes. In this thesis, nine attributes covering basic kinds of tasks were targeted for the assessment of three testing tools; later, this approach can be extended to many more attributes and tools. The aim of this work is to show that the approach is effective and can be used on a much larger scale. A distinguishing feature of this work, which also sets it apart from previous work, is that the applications used were all purpose-built for this research. The reason is to analyze each specific attribute in isolation and to keep the tool from being bottlenecked by something trivial that is not the attribute under test. This means nine applications, each focused on one specific attribute. The main contributions of this thesis are:
• A summary of three existing testing tools and their respective techniques for automatic test input generation for Android applications.
• A detailed study of the usage of these testing tools on the nine applications specially designed and developed for this study.
• An analysis of the results obtained and a comparison of the performance of the selected tools.
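To make the attribute check concrete, here is a hypothetical harness sketch (the package name, the log marker, and the choice of the stock `adb shell monkey` fuzzer as a stand-in for the studied tools are all assumptions, not taken from the thesis): the purpose-built app logs a marker when its attribute executes, an input-generation tool is run against it, and the device log is checked for the marker.

```python
# Hypothetical attribute-benchmark harness. All names below are invented for
# illustration; `adb shell monkey` merely stands in for a real testing tool.
import subprocess

PACKAGE = "com.example.audioattr"    # hypothetical single-attribute app
MARKER = "ATTR_AUDIO_PLAYED"         # logged by the app when playback starts

subprocess.run(["adb", "logcat", "-c"], check=True)   # clear the device log
subprocess.run(["adb", "shell", "monkey", "-p", PACKAGE,
                "--throttle", "100", "-v", "2000"], check=True)  # 2000 events
log = subprocess.run(["adb", "logcat", "-d"], check=True,
                     capture_output=True, text=True).stdout
print("attribute exercised" if MARKER in log else "attribute missed")
```

Run once per tool and per attribute application, a check of this kind yields the tool-by-attribute comparison matrix described above.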

Relevance: 90.00%

Abstract:

In this paper we define the notion of an axiom dependency hypergraph, which explicitly represents how axioms are included in a module by the algorithm for computing locality-based modules. A locality-based module of an ontology corresponds to a set of connected nodes in the hypergraph, and the atoms of an ontology correspond to its strongly connected components. Collapsing the strongly connected components into single nodes yields a condensed hypergraph that constitutes a representation of the atomic decomposition of the ontology. To speed up the condensation of the hypergraph, we first reduce its size by collapsing the strongly connected components of its graph fragment using a linear-time graph algorithm. This approach significantly reduces the time needed to compute the atomic decomposition of an ontology. We provide an experimental evaluation of computing the atomic decomposition of large biomedical ontologies. We also demonstrate a significant improvement in the time needed to extract locality-based modules from an axiom dependency hypergraph and its condensed version.
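As a concrete sketch of the condensation step (a minimal illustration, not the paper's implementation): only hyperedges with a singleton tail enter the graph fragment, whose strongly connected components can then be collapsed with a linear-time SCC algorithm such as Kosaraju's.

```python
# Minimal sketch: collapse the SCCs of the graph fragment of an axiom
# dependency hypergraph using Kosaraju's two-pass, linear-time algorithm.
from collections import defaultdict

def sccs(nodes, succ):
    """Map each node to an SCC representative; succ: node -> set of nodes."""
    order, seen = [], set()
    def dfs(v):                          # first pass: record finishing order
        seen.add(v)
        for w in succ[v]:
            if w not in seen:
                dfs(w)
        order.append(v)
    for v in nodes:
        if v not in seen:
            dfs(v)
    pred = defaultdict(set)              # reversed graph for the second pass
    for v in nodes:
        for w in succ[v]:
            pred[w].add(v)
    rep = {}
    for v in reversed(order):            # second pass: sweep reversed edges
        if v in rep:
            continue
        stack = [v]
        while stack:
            u = stack.pop()
            if u not in rep:
                rep[u] = v               # v represents u's component
                stack.extend(w for w in pred[u] if w not in rep)
    return rep

# Toy hypergraph over axioms 1..5. Only singleton-tail hyperedges enter the
# graph fragment; ({3, 4}, {5}) is a complex hyperedge, ignored at this stage.
hyperedges = [({1}, {2}), ({2}, {1, 3}), ({3, 4}, {5})]
succ = defaultdict(set)
for tail, head in hyperedges:
    if len(tail) == 1:
        (axiom,) = tail
        succ[axiom] |= head
print(sccs({1, 2, 3, 4, 5}, succ))       # axioms 1 and 2 share one component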

Relevance: 90.00%

Abstract:

Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning. Directed hypergraphs have also been presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations; in this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships: while an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly connected components. These components can be collapsed into single nodes; as a result, the size of the original hypergraph can be significantly reduced if the strongly connected components have many nodes. This approach can contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms for computing the strongly connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram in which more than 70% of the nodes are distributed into three large sets, one of which is a large strongly connected component.
This particular structure has also been observed in complex networks in other fields, such as biology. Similar studies could not be conducted for directed hypergraphs because no algorithm existed for computing the strongly connected components of such hypergraphs. In this thesis, we investigate how to compute the strongly connected components of directed hypergraphs. We present two new algorithms and show their correctness and computational complexity. One of these algorithms is inspired by Tarjan's algorithm for directed graphs. The second algorithm follows a simpler approach, based on the fact that two strongly connected nodes of a graph reach exactly the same nodes; in other words, their reachable sets coincide. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by well-known random graph models such as Erdős-Rényi, Newman-Watts-Strogatz and Barabási-Albert. Several optimisations of both algorithms were implemented and analysed; in particular, collapsing the strongly connected components of the directed graph that can be constructed by removing certain complex hyperedges from the original directed hypergraph notably improves the running times of the algorithms on several of the hypergraphs used in the evaluation. Besides the application examples mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. We focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates their reuse and maintenance. Analysing all modules of an ontology, however, is in general not feasible, as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can be succinctly represented using the atomic decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The atomic decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology. A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to the strongly connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and, in this manner, compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO BioPortal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
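A minimal sketch of the reachable-set idea behind the second algorithm, assuming B-connectivity semantics (a hyperedge (T, H) can be traversed only once every node of its tail T has been reached); this naive fixpoint version illustrates the definition rather than the thesis's optimised implementation:

```python
# Sketch only: strongly connected components of a directed hypergraph via
# forward reachable sets, under assumed B-connectivity semantics.
def b_reach(source, hyperedges):
    reached = {source}
    changed = True
    while changed:                       # fixpoint over enabled hyperedges
        changed = False
        for tail, head in hyperedges:
            if tail <= reached and not head <= reached:
                reached |= head          # whole tail reached: fire the edge
                changed = True
    return reached

def hyper_sccs(nodes, hyperedges):
    reach = {v: b_reach(v, hyperedges) for v in nodes}
    components, assigned = [], set()
    for v in nodes:
        if v in assigned:
            continue
        comp = {u for u in reach[v] if v in reach[u]}   # mutual reachability
        components.append(comp)
        assigned |= comp
    return components

# Toy hypergraph: a cycle a -> b -> c -> a of simple hyperedges plus one
# complex hyperedge ({a, b}, {d}).
H = [({"a"}, {"b"}), ({"b"}, {"c"}), ({"c"}, {"a"}), ({"a", "b"}, {"d"})]
print(hyper_sccs(["a", "b", "c", "d"], H))   # components {a, b, c} and {d}
```

Each call to b_reach is itself polynomial, so this naive grouping is far from the optimised algorithms evaluated in the thesis; it only makes the underlying characterisation of hypergraph strong connectivity explicit.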

Relevance: 90.00%

Abstract:

Second-order Lagrangian densities admitting a first-order Hamiltonian formalism are studied; namely, i) necessary and sufficient conditions for the Poincaré–Cartan form of a second-order Lagrangian on an arbitrary fibred manifold p : E → N to be projectable onto J^1 E are explicitly determined; ii) for each such Lagrangian, a first-order Hamiltonian formalism is developed and a new notion of regularity is introduced; iii) the variational problems of this class defined by regular Lagrangians are proved to be involutive.
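For orientation only, and as background not taken from the paper: in the classical mechanics special case of a first-order Lagrangian L(t, q, q̇), the Poincaré–Cartan form reads

```latex
% Mechanics special case E = \mathbb{R} \times Q \to \mathbb{R} (the paper
% works on an arbitrary fibred manifold p : E \to N); summation over i.
\Theta_L \;=\; \frac{\partial L}{\partial \dot q^{\,i}}\, dq^i
  \;+\; \Bigl( L - \dot q^{\,i}\, \frac{\partial L}{\partial \dot q^{\,i}} \Bigr)\, dt .
```

For a second-order Lagrangian the analogous form lives on a higher-order jet bundle, and projectability onto J^1 E asks when it depends on first derivatives only, which is the condition characterised in the paper.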

Relevance: 90.00%

Abstract:

The role of matter has remained central to the making and the thinking of architecture, yet many attempts to capture its essence have been trapped in a dialectic tension between form and materiality, between material consistency and immaterial modes of perception, between static states and dynamic processes, between the real and the virtual, advancing an increasing awareness of the perplexing complexity of the material world. Within that complexity, the notion of agency, emerging from and within ecological, politico-economic and socio-cultural processes, calls for a reconceptualization of matter, and consequently of processes of materialisation, offering a new understanding of context and space, approached as a field of dynamic relationships. In this context, cutting across the boundary between architectural discourse and practice as well as across a vast trans-disciplinary territory, this dissertation aims to illustrate a variety of design methodologies that derive from this relational approach. More specifically, the intention is to offer new insights into the spatial epistemologies embedded within the notion of atmosphere, commonly associated with the so-called New Phenomenology, and to reflect upon its implications for architectural production. The argument that follows has a twofold dimension. First, through a scrutiny of the notion of atmosphere, the aim is to explore ways of thinking and shaping reality through relations, thus acknowledging the aforementioned complexity of the material universe disclosed through human and non-human as well as material and immaterial forces. Secondly, although concern for atmospherics has flourished over the last few decades, the objective is to reveal that the conceptual foundations and procedures for the production of atmosphere may be found beneath the surface of contemporary debates. Hence, in order to unfold and illustrate the assumptions advocated above, an archaeological approach is adopted, tracing a particular projective genealogy, one that builds upon an atmospheric awareness. In tracing this atmospheric awareness, the study explores the notoriously ambiguous nature and the twofold dimension of atmosphere, meteorological and aesthetic, and the heterogeneity of meanings embedded in them. In this context, the notion of atmosphere is presented as parallactic. It transgresses the formal and material boundaries of bodies. It calls for a re-evaluation of perceptual experience, opening a new gap that exposes the orthodox space-body-environment relationships to questioning. It offers architecture an expanded domain in which to manifest itself, defining architectural space as a contingent construction and field of engagement, and presenting matter as a locus of production/performance/action. Consequently, it is this expanded, relational dimension that constitutes the foundation of what in the context of this study is referred to as affective tectonics: a tectonics that represents the processual and experiential multiplicity of convergent time and space, body and environment, the material and the immaterial; a tectonics in which matter neither appears as an inert and passive substance, nor is limited to its traditionally regarded tectonic significance or expressive values, but is presented as an active element charged with inherent potential and vitality.
By defining such a relational materialism, the intention is to expand the spectrum of material attributes, revealing the intrinsic relationships between the physical properties of materials and their performative, transformative and affective capacities, including effects of interference and haptic dynamics, i.e. protocols of transmission and interaction. The expression that encapsulates its essence is: ACTIVE MATERIALITY.

Relevance: 90.00%

Abstract:

Background: This project's idea arose from the need of the professors of the department of Computer Languages and Systems and Software Engineering (DLSIIS) to develop exams with multiple-choice questions in a more productive and comfortable way than the one they were using. The goal of this project is to develop an application that can be easily used by the professors of the DLSIIS when they need to create a new exam. The main problems of the previous creation process were the difficulty of searching for a question that meets specific conditions in the files of previous exams, and the difficulty of editing exams because of the format of the text files employed. Results: The results shown in this document allow the reader to understand how the final application works and how it successfully addresses every customer need. The elements that will help the reader understand the application are the structure of the application, the design of the different components, diagrams that show the workflow of the application, and some selected fragments of code. Conclusions: The goals stated in the application requirements are met. In addition, some thoughts are offered on the work performed during the development of the application and how it improved the author's skills in web development.
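As an illustration of the question-search problem the application addresses, here is a hypothetical sketch (none of these names come from the project) of storing questions as structured records so they can be filtered by conditions rather than grepped out of raw exam text files:

```python
# Hypothetical question bank: structured records plus a conditional search,
# sketching the kind of lookup the old text-file workflow made difficult.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    choices: list
    answer: int                     # index of the correct choice
    topic: str
    difficulty: int                 # e.g. 1 (easy) .. 5 (hard)
    used_in: set = field(default_factory=set)   # ids of exams that used it

def search(bank, topic=None, max_difficulty=None, not_used_in=None):
    """Return the questions matching all the given conditions."""
    hits = bank
    if topic is not None:
        hits = [q for q in hits if q.topic == topic]
    if max_difficulty is not None:
        hits = [q for q in hits if q.difficulty <= max_difficulty]
    if not_used_in is not None:
        hits = [q for q in hits if not_used_in not in q.used_in]
    return hits

bank = [Question("What does a compiler do?", ["Runs code", "Translates code"],
                 1, "compilers", 1, {"exam-2014-jan"})]
print(search(bank, topic="compilers", not_used_in="exam-2015-jun"))
```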

Relevance: 90.00%

Abstract:

This paper is a critical examination of Alfred North Whitehead's attempt to solve the traditional problem of evil. Whitehead's conception of evil is crucial to his process cosmology because it is integral to his notion of creation, in which evil is understood in relationship to the larger dynamic of God's creative activity. While Whitehead's process theodicy is interesting, he fails to escape between the horns of the traditional dilemma. Whitehead is often criticized for treating evil as merely apparent. While some process philosophers, notably Maurice Barineau, have defended Whitehead from this charge, it can be shown that this is an implication of Whitehead's approach. Moreover, Whitehead's theodicy fails to address radical moral evil in its concrete dimension with respect to real human suffering. As a result, Whitehead's theodicy is not relevant to Christian theology. My paper is divided into two parts. I first briefly discuss the traditional problem of evil and some of the traditional solutions proposed to resolve it. The remainder of the paper demonstrates why Whitehead's theodicy addresses the traditional problem of evil only at the expense of theological irrelevancy.

Relevance: 90.00%

Abstract:

Sequence analysis based on multiple isolates representing essentially all genera and species of the classic family Volvocaceae has clarified their phylogenetic relationships. Cloned internal transcribed spacer sequences (ITS-1 and ITS-2, flanking the 5.8S gene of the nuclear ribosomal gene cistrons) were aligned, guided by ITS transcript secondary structural features, and subjected to parsimony and neighbor-joining distance analysis. The results confirm the notion of a single common ancestor, and Chlamydomonas reinhardtii, alone among all sequenced green unicells, is most similar. Interbreeding isolates were nearest neighbors on the evolutionary tree in all cases. Some taxa, at whatever level, prove to be clades by sequence comparisons, but others provide striking exceptions. The morphological species Pandorina morum, known to be widespread and diverse in mating pairs, was found to encompass all of the isolates of the four species of Volvulina. Platydorina appears to have originated early and not to fall within the genus Eudorina, with which it can sometimes be confused morphologically. The four species of Pleodorina appear variously associated with Eudorina examples. Although the species of Volvox are each clades, the genus Volvox is not. These conclusions confirm and extend prior, more limited studies on the nuclear SSU and LSU rDNA genes and the plastid-encoded rbcL and atpB. The phylogenetic tree suggests which classical taxonomic characters are most misleading and provides a framework for molecular studies of the cell-cycle-related and other alterations that have engendered diversity in both vegetative and sexual colony patterns in this classical family.
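As a sketch of the distance-based part of such an analysis (not the paper's actual pipeline; the input file name is hypothetical, and the paper additionally used secondary-structure-guided alignment and parsimony), Biopython can build a neighbor-joining tree from an ITS alignment:

```python
# Minimal neighbor-joining sketch with Biopython; assumes a pre-aligned
# FASTA file of ITS-1/ITS-2 sequences (file name is hypothetical).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("volvocaceae_its.fasta", "fasta")
dm = DistanceCalculator("identity").get_distance(alignment)  # p-distances
tree = DistanceTreeConstructor().nj(dm)                      # neighbor joining
Phylo.draw_ascii(tree)                                       # quick inspection
```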

Relevance: 90.00%

Abstract:

We study solutions of the two-dimensional quasi-geostrophic thermal active scalar equation involving simple hyperbolic saddles. There is a naturally associated notion of simple hyperbolic saddle breakdown. It is proved that such breakdown cannot occur in finite time. At large time, these solutions may grow at most at a quadruple-exponential rate. Analogous results hold for the incompressible three-dimensional Euler equation.
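For background, the equation under study is commonly written as follows (the Constantin–Majda–Tabak convention; sign conventions vary across papers, and this statement is supplied here rather than quoted from the abstract):

```latex
% 2D quasi-geostrophic thermal active scalar (SQG) equation:
% \theta is the active scalar (temperature), \psi the stream function.
\partial_t \theta + u \cdot \nabla \theta = 0, \qquad
u = \nabla^{\perp} \psi, \qquad
\theta = -\left(-\Delta\right)^{1/2} \psi .
```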

Relevance: 90.00%

Abstract:

Modern functional neuroimaging methods, such as positron-emission tomography (PET), optical imaging of intrinsic signals, and functional MRI (fMRI) utilize activity-dependent hemodynamic changes to obtain indirect maps of the evoked electrical activity in the brain. Whereas PET and flow-sensitive MRI map cerebral blood flow (CBF) changes, optical imaging and blood oxygenation level-dependent MRI map areas with changes in the concentration of deoxygenated hemoglobin (HbR). However, the relationship between CBF and HbR during functional activation has never been tested experimentally. Therefore, we investigated this relationship by using imaging spectroscopy and laser-Doppler flowmetry techniques, simultaneously, in the visual cortex of anesthetized cats during sensory stimulation. We found that the earliest microcirculatory change was indeed an increase in HbR, whereas the CBF increase lagged by more than a second after the increase in HbR. The increased HbR was accompanied by a simultaneous increase in total hemoglobin concentration (Hbt), presumably reflecting an early blood volume increase. We found that the CBF changes lagged after Hbt changes by 1 to 2 sec throughout the response. These results support the notion of active neurovascular regulation of blood volume in the capillary bed and the existence of a delayed, passive process of capillary filling.
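To illustrate the kind of lag estimate reported here, the following is a schematic numpy sketch on synthetic curves (not the recorded data; sampling rate and response shapes are invented): the peak of the cross-correlation between the HbR and CBF time series gives the delay between them.

```python
# Schematic lag estimation between two haemodynamic time series.
import numpy as np

fs = 10.0                                   # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
hbr = np.exp(-((t - 5.0) / 2.0) ** 2)       # early HbR rise at t = 5 s
cbf = np.exp(-((t - 6.5) / 2.0) ** 2)       # CBF response lagging by 1.5 s

hbr0, cbf0 = hbr - hbr.mean(), cbf - cbf.mean()   # remove baselines
xcorr = np.correlate(cbf0, hbr0, mode="full")     # cross-correlation
lags = np.arange(-len(t) + 1, len(t)) / fs        # lag axis in seconds
print(f"estimated CBF lag: {lags[xcorr.argmax()]:.1f} s")   # ~1.5 s
```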

Relevance: 90.00%

Abstract:

The transcription factor B-cell-specific activator protein (BSAP) represses the murine immunoglobulin heavy-chain 3' enhancer 3'alphaE(hs1,2) in B cells. Analysis of various 3'alphaE deletional constructs indicates that sequences flanking the a and b BSAP-binding sites are essential for appropriate regulation of the enhancer. An octamer motif 5' of the a site and a specific G-rich motif 3' of the b site were identified by competition in electrophoretic mobility-shift assays and by methylation-interference footprinting analysis. Site-directed mutagenesis of either the octamer or the G-rich site resulted in the complete release of repression of 3'alphaE(hs1,2), implicating these two motifs in the repression of this enhancer in B cells. However, when both BSAP-binding sites were mutated, the octamer and G-rich motifs functioned as activators. Moreover, in plasma cells, where BSAP is not expressed, 3'alphaE(hs1,2) is active, and its activity depends on the presence of the other two factors. These results suggest that in B cells, 3'alphaE(hs1,2) is down-regulated by the concerted actions of BSAP, octamer, and G-rich DNA-binding proteins. Supporting this notion of concerted repression, a physical interaction between BSAP and octamer-binding proteins was demonstrated using glutathione S-transferase fusion proteins. Thus, concerted repression of 3'alphaE(hs1,2) in B cells provides a sensitive mechanism by which this enhancer, either individually or as part of a locus control region, is highly responsive to any of several participating factors.

Relevance: 90.00%

Abstract:

According to the classical calcium hypothesis of synaptic transmission, the release of neurotransmitter from presynaptic terminals occurs through an exocytotic process triggered by depolarization-induced presynaptic calcium influx. However, evidence has been accumulating in the last two decades indicating that, in many preparations, synaptic transmitter release can persist or even increase when calcium is omitted from the perfusing saline, leading to the notion of a "calcium-independent release" mechanism. Our study shows that the enhancement of synaptic transmission between photoreceptors and horizontal cells of the vertebrate retina induced by low-calcium media is caused by an increase of calcium influx into presynaptic terminals. This paradoxical effect is accounted for by modifications of surface potential on the photoreceptor membrane. Since lowering extracellular calcium concentration may likewise enhance calcium influx into other nerve cells, other experimental observations of "calcium-independent" release may be reaccommodated within the framework of the classical calcium hypothesis without invoking unconventional processes.

Relevance: 90.00%

Abstract:

In the 29 years following "Our Common Future" by the United Nations, there has been considerable debate among governments, civil society, interest groups and business organisations about what constitutes sustainable development, which constitutes evidence of a contested discourse concerning sustainability. The purpose of this study is to understand this debate in the developing economic context of Brazil and, in particular, to understand and critique the social and environmental accounting [SEA] discursive constructions relating to the State-owned company Petrobras, as well as to understand the Brazilian literature on SEA. The discourse theory [DT]-based analysis employs rhetorical redescription to analyse twenty-two reports from Petrobras from 2004 to 2013. I investigate the political notions by employing the methodological framework of the Logics of Critical Explanation [LCE]. LCE comprises five methodological steps: problematisation, retroduction, logics (social, political and fantasmatic), articulation and critique. The empirical discussion suggests that the hegemony of economic development operates to obfuscate, rhetorically, the development of sustainability, so as to maintain the core business of Petrobras, conceived as capital accumulation. Equally, these articulations illustrate how the constructions of SEA operate to serve the company's purpose, with few if any profound changes in the integration of sustainability. The Brazilian literature on SEA sustains the status quo of neo-liberal market policies that operate to protect the dominant business-case approach and maintain an agenda of wealth creation in relation to social and environmental needs. The articulations of the case were manifested in, for example, corruption involving over-payments for contracts, and in unsustainable practices relating to the use of fossil fuels, demonstrating an antagonism between action and disclosure. The corruption scandal that emerged after the SEA disclosures highlighted the rhetorical nature of disclosure: financial resources were diverted from the company to political parties, and engineering contractors hid facts through incomplete disclosures. The articulations of SEA misrepresent the broader context of the meanings associated with sustainability, restricting the constructions of SEA principally to serving and representing the intentions of the most powerful groups. The significance of SEA is thus narrowed to represent particular interests. The study argues for more critical studies, as the limited Brazilian literature concerning SEA has kept a 'safe distance' from substantively critiquing the constructions of SEA and its articulations in the Brazilian context. The literature review and the Petrobras case illustrate a variety of naming, instituting and articulatory practices that endeavour to maintain the current hegemony of development in an emerging economy, which allows Petrobras to continue to realise significant profit at the expense of the social and the environmental. The constructed idea of development in Petrobras' discourses emphasises a rhetoric of broad development, but in reality these discourses were the antithesis of political, social and ethical developmental concerns. These constructions aim to hide the struggles between social inequality and the exploitation of natural resources, and constitute excuses for a fanciful notion of rhetorical and hegemonic neo-liberal development.
In summary, this thesis contributes to the prior literature in five ways: (i) the addition of DT to the analysis of SEA enhances the discussion of political elements such as hegemony, antagonism, the logic of equivalence/difference, ideology and articulation; (ii) the analysis of an emerging economy such as Brazil brings a new perspective to the discussion of the discourses of SEA and development; (iii) the thesis includes a focus on rhetoric to discuss the maintenance of the status quo; (iv) the holistic structure of the LCE approach enlarges the understanding of the social, political and fantasmatic logics in SEA studies; and (v) the thesis combines an analysis of the literature with the case of Petrobras to characterise and critique the state of the Brazilian academy and its impact on the significance of SEA. This thesis therefore argues for more critical studies in the Brazilian academy, given the persistence of ideas of SEA and development that take deep exclusions and contradictions for granted and provide little space for critique.

Relevance: 90.00%

Abstract:

High-quality software, delivered on time and on budget, constitutes a critical part of most products and services in modern society. Our government has invested billions of dollars to develop software assets, often redeveloping the same capability many times. Recognizing the waste involved in redeveloping these assets, in 1992 the Department of Defense issued the Software Reuse Initiative. Its vision was "to drive the DoD software community from its current 're-invent the software' cycle to a process-driven, domain-specific, architecture-centric, library-based way of constructing software." Twenty years after this initiative was issued, there is evidence of the vision beginning to be realized in nonembedded systems. However, virtually every large embedded system undertaken has incurred large cost and schedule overruns, and investigations into the root cause of these overruns implicate reuse. Why are we seeing improvements in the outcomes of large-scale nonembedded systems and worse outcomes in embedded systems? This question is the foundation of this research. The experiences of the aerospace industry have led to a number of questions about reuse and how the industry employs it in embedded systems. For example, does reuse in embedded systems yield the same outcomes as in nonembedded systems? Are the outcomes positive? If the outcomes differ, it may indicate that embedded systems should not use data from nonembedded systems for estimation. Are embedded systems using the same development approaches as nonembedded systems? Does the development approach make a difference? If embedded systems develop software differently from nonembedded systems, the same processes may not apply to both types of systems. What about the reuse of different artifacts? Perhaps certain artifacts, when reused, contribute more, or are more difficult to use, in embedded systems. Finally, what are the success factors and obstacles to reuse, and are they the same in embedded systems as in nonembedded systems? The research in this dissertation comprises a series of empirical studies with professionals in the aerospace and defense industry as its subjects. The main focus has been to investigate the reuse practices of embedded systems professionals and nonembedded systems professionals and to compare the methods and artifacts used against the outcomes. The research followed a combined qualitative and quantitative design. The qualitative data were collected by surveying software and systems engineers, interviewing senior developers, and reading numerous documents and other studies. The quantitative data were derived by converting survey and interview respondents' answers into codes that could be counted and measured. From the search of the existing empirical literature, we learned that outcomes of reuse in embedded systems do in fact differ significantly from those in nonembedded systems, particularly in effort under a model-based development approach, and in quality where the development approach was not specified. The questionnaire showed differences between the development approaches used in embedded and nonembedded projects; in particular, embedded systems were significantly more likely to use a heritage/legacy development approach. There was also a difference in the artifacts reused, with embedded systems more likely to reuse hardware, test products, and test clusters.
Nearly all the projects reported reusing code, but the questionnaire showed that the reuse of code brought mixed results. One difference expressed by the respondents was the difficulty of reusing code in embedded systems when the platform changed. Semi-structured interviews were performed to explain why the phenomena seen in the literature review and the questionnaire were observed. We asked respected industry professionals, such as senior fellows, fellows and distinguished members of technical staff, about their experiences with reuse. We learned that many embedded systems used heritage/legacy development approaches because those systems had been around for many years, since before models and modeling tools became available. We learned that reuse of code is beneficial primarily when the code does not require modification but that, especially in embedded systems, once the code has to be changed, its reuse yields few benefits. Finally, while platform independence is a goal for many in nonembedded systems, it is certainly not a goal for embedded systems professionals, and in many cases it is a detriment; however, both embedded and nonembedded systems professionals endorsed the idea of platform standardization. We conclude that while reuse in embedded systems and nonembedded systems differs today, they are converging: as heritage embedded systems are phased out, models become more robust, and platforms are standardized, reuse in embedded systems will become more like that in nonembedded systems.

Relevance: 90.00%

Abstract:

This dissertation investigates China's recent shift in climate change policy with a refined discourse approach. Methodologically, by adopting a neo-Gramscian notion of hegemony, a generative definition of discourse and an ontologically pluralist position, the study constructs a theoretical framework named "discursive hegemony" that identifies the "social forces" enabling social change and focuses on the role of the discursive mechanisms via which those forces operate and produce effects. The key empirical finding of this study is that a co-evolution of conditions shaped the outcome, China's climate policy shift. In examining the case, a before-after within-case comparison was designed to analyze the variations in material, institutional and ideational conditions, using methods including interviews, conventional narrative/text analysis and descriptive statistics. Specifically, changes in energy use, in the structure of the decision-making body, and in the narratives about sustainable development reflect how these three types of social force operated in China in the first few years of the 21st century, causing the economic development agenda to absorb the climate issue and turning the policy frame for the latter from mainly a diplomatic matter into a potential opportunity for better-quality growth. With the discursive operation of "science-based development", China's energy policy has been a good example of the Chinese understanding of sustainability, characterized by economic primacy, ecological viability and social green-engineering. This discursive evolution, however, is a double-edged sword: it has pushed forward some fast, top-down mitigation measures on the one hand, but has also created, and will likely continue to create, social and ecological havoc on the other. The study makes two major contributions. First, on the empirical level, because China is an international actor that major IR theories did not expect to cooperate on the climate issue, this study adds a critical case to the studies of global (environmental) governance and to the ideational approach in the IR discipline. Second, on the theory-building level, the model of discursive hegemony can offer a causally deeper mode of explanation because it traces the process of co-evolution of social forces.