111 results for Abstractions


Relevance:

10.00%

Publisher:

Abstract:

Graduate Program in Psychology - FCLAS

Relevance:

10.00%

Publisher:

Abstract:

This study analyses the Mais Educação Program (PME) from a Marxian and Marxist perspective. It examines the categories of Integral Education, Sport, and Leisure, their relationship to the debate on neoliberalism and its influence on the Brazilian State, as well as the relevant legal and contextual frameworks. The research was conducted through a bibliographic review based on previously published material, mainly books and scientific articles, cross-referencing the first abstractions about the object with documentary research, the bibliographic review, and field research. The general objective was to analyse the limits and possibilities of implementing Sport and Leisure within the PME, guided by the central research question: what are the limits and possibilities of implementing Sport and Leisure in the Mais Educação Program in local and national policy and at the Esmerina Bou Habib school in Abaetetuba, Pará (2008-2012)? We conclude that the Mais Educação Program has remained confined to the logic of capital and its reproductive process: preparing students for the labour market and socialising the values this order requires. What would be needed instead is full-time Integral Education, in which the construction of adequate spaces is a priority, as an alternative that favours the public school; in other words, more time in a new kind of school. The program has not yet succeeded in moving beyond the single-shift school towards a two-shift school; investment is insufficient, and the logic of productivity pervades the program, including the priority given to competitive Sport.
We found that the Sport and Leisure macro-field is in high demand among students and that its activities closely resemble physical education classes; this proximity, however, must not lead to the replacement of physical education classes, a substitution that was in fact observed during the field research. We argue that full-time Integral Education will be important for improving education in Brazil in structural terms, as well as for a better organisation of pedagogical work, the valorisation of the educational process and of the teacher, and the expansion of educational possibilities; in short, for an educational process that is more qualitative, critical, and dialectical in the formation of the working class. Sport, in turn, should be treated from the paradigm of body culture, as a human cultural element to be socialised in school in the context of the formation of the new man and the new woman. The socialisation of systematised knowledge is necessary for the organisation of the working class in its revolutionary struggle; in this sense, school education plays a fundamental role in the struggle for socialism. We believe this research is socially relevant because it proposes a broad, critical analysis capable of treating the object and the categories of analysis scientifically, in tune with substantially transformative perspectives, and we hope it contributes to the debate on the categories of Integral Education, Sport, and Leisure.

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

Abstract Background Over recent years, a number of researchers have investigated how to improve the reuse of crosscutting concerns. New possibilities have emerged with the advent of aspect-oriented programming, and many frameworks have been designed around the abstractions provided by this paradigm. We call this type of framework a Crosscutting Framework (CF), as it usually encapsulates a generic and abstract design of one crosscutting concern. However, most of the proposed CFs employ white-box strategies in their reuse process, which demand mainly two technical skills: (i) knowing the syntax details of the programming language used to build the framework and (ii) being aware of the architectural details of the CF and its internal nomenclature. A further problem is that the reuse process can only begin once development reaches the implementation phase, preventing it from starting earlier. Method To address these problems, we present in this paper a model-based approach for reusing CFs that shields application engineers from technical details, letting them concentrate on what the framework actually needs from the application under development. To support our approach, two models are proposed: the Reuse Requirements Model (RRM) and the Reuse Model (RM). The former describes the framework structure, while the latter supports the reuse process. As soon as the application engineer has filled in the RM, the reuse code can be generated automatically. Results We also present the results of two comparative experiments using two versions of a Persistence CF: the original one, whose reuse process is based on writing code, and the new one, which is model-based. The first experiment evaluated productivity during the reuse process, and the second evaluated the effort of maintaining applications developed with both CF versions.
The results show an improvement of 97% in productivity; however, little difference was observed in the effort required to maintain the resulting applications. Conclusion Using the approach presented here, we conclude that (i) the instantiation of CFs can be automated, and (ii) developer productivity improves when a model-based instantiation approach is used.
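The contrast between white-box reuse and model-based generation can be sketched as follows. This is a hypothetical illustration only: the dictionary standing in for a filled-in Reuse Model, and the names used in it, are invented and are not the RRM/RM notation of the paper.

```python
# Hypothetical sketch of model-based framework reuse: a filled-in
# "Reuse Model" (here just a dict) is turned into reuse code by a
# generator, so the application engineer never has to learn the CF's
# internal nomenclature. All names and the output format are invented.

REUSE_MODEL = {
    "framework": "PersistenceCF",
    "persistent_classes": ["Customer", "Order"],
    "id_fields": {"Customer": "customerId", "Order": "orderNumber"},
}

def generate_reuse_code(model):
    """Emit the glue code a white-box reuse process would require by hand."""
    lines = [f"// generated reuse code for {model['framework']}"]
    for cls in model["persistent_classes"]:
        id_field = model["id_fields"][cls]
        lines.append(f"declare parents: {cls} implements Persistent;")
        lines.append(f"// map {cls}.{id_field} as primary key")
    return "\n".join(lines)

print(generate_reuse_code(REUSE_MODEL))
```

The point of the experiment's 97% productivity gain is visible even in this toy: the engineer edits only the model, never the generated lines.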

Relevance:

10.00%

Publisher:

Abstract:

Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm, which is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing, and managing typically distributed software systems at runtime. However, engineers today often work with technologies that do not support the abstractions used in the design of these systems; for this reason, research on methodologies becomes a central concern. Currently, most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and still largely "academic" approaches to agent-oriented systems development. Moreover, such methodologies are often poorly documented and are frequently defined and presented only through specific aspects of the methodology. Meta-models therefore play a fundamental role in comparing and evaluating methodologies: a meta-model specifies the concepts, rules, and relationships used to define a methodology. Although it is possible to describe a methodology without an explicit meta-model, formalising the ideas underpinning the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e.
the process to be followed, the work products to be generated, and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to model all the aspects of multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; it is at least clear that all non-agent elements of a multi-agent system are typically considered part of its environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent systems community, so the environment should be explicitly accounted for, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions, i.e. entities of the environment encapsulating some functions, and topology abstractions, i.e. entities of the environment that represent its (either logical or physical) spatial structure. In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for managing the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which designers can use to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology, in which environment abstractions and layering principles are exploited for engineering multi-agent systems.
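The two ingredients named above can be made concrete with a minimal sketch. This is not SODA itself; the classes and method names below are invented purely to illustrate environment and topology entities as first-class design elements alongside agents.

```python
# Illustrative sketch (not the SODA methodology): modelling the
# environment of a multi-agent system with first-class abstractions.
# All class and method names are hypothetical.

class EnvironmentAbstraction:
    """A non-agent entity of the environment encapsulating some function."""
    def __init__(self, name, function):
        self.name = name
        self.function = function  # callable service the entity offers

    def use(self, *args):
        return self.function(*args)

class TopologyAbstraction:
    """Represents the (logical or physical) spatial structure."""
    def __init__(self):
        self.neighbours = {}  # place -> set of adjacent places

    def connect(self, a, b):
        self.neighbours.setdefault(a, set()).add(b)
        self.neighbours.setdefault(b, set()).add(a)

class Agent:
    def __init__(self, name, place):
        self.name, self.place = name, place

    def move(self, topology, destination):
        # agents may only move along the declared spatial structure
        if destination in topology.neighbours.get(self.place, set()):
            self.place = destination
        return self.place

topology = TopologyAbstraction()
topology.connect("room-A", "room-B")
whiteboard = EnvironmentAbstraction("whiteboard", lambda msg: f"posted: {msg}")
agent = Agent("a1", "room-A")
print(agent.move(topology, "room-B"), whiteboard.use("hello"))
```

Note how the topology constrains agent behaviour without the agent abstraction having to encode spatial rules itself, which is the design-dimension argument made in the text.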

Relevance:

10.00%

Publisher:

Abstract:

The aim of this thesis is to go through different approaches for proving expressiveness properties in several concurrent languages. We analyse four different calculi, using a different technique for each. We begin with a synchronous language: we explore the expressiveness of a fragment of CCS! (a variant of Milner's CCS where replication is considered instead of recursion) with respect to the existence of faithful encodings (i.e. encodings that respect the behaviour of the encoded model without introducing unnecessary computations) of models of computability strictly less expressive than Turing machines, namely grammars of types 1, 2, and 3 in the Chomsky hierarchy. We then move to asynchronous languages and study full abstraction for two Linda-like languages. Linda can be considered the asynchronous version of CCS plus a shared memory (a multiset of elements) used for storing messages. After defining a denotational semantics based on traces, we obtain fully abstract semantics for both languages by using suitable abstractions to identify different traces which do not correspond to different behaviours. Since the ability of one of the two variants to recognise multiple occurrences of messages in the store (which accounts for an increase in expressiveness) is reflected in a less complex abstraction, we then study other languages where multiplicity plays a fundamental role. We consider CHR (Constraint Handling Rules), a language which uses multi-headed (guarded) rules, and prove that multiple heads augment the expressive power of the language. Indeed, we show that if we restrict to rules whose head contains at most n atoms, we obtain a hierarchy of languages with increasing expressiveness (i.e. the CHR language allowing at most n atoms in the heads is more expressive than the language allowing at most m atoms, with m < n).
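The shared-memory model attributed to Linda above can be sketched in a few lines: a multiset store accessed through asynchronous primitives. This is an illustration of the model only, not the thesis's calculus or semantics; the method names follow common Linda terminology (out/rd/in).

```python
# Minimal sketch of a Linda-like shared store: messages live in a
# multiset, and processes interact via out/rd/in primitives.
from collections import Counter

class TupleSpace:
    def __init__(self):
        self.store = Counter()  # multiset of messages

    def out(self, msg):
        """Asynchronously emit a message into the shared store."""
        self.store[msg] += 1

    def rd(self, msg):
        """Non-destructive test for the presence of a message."""
        return self.store[msg] > 0

    def in_(self, msg):
        """Consume one occurrence of the message, if present."""
        if self.store[msg] > 0:
            self.store[msg] -= 1
            return True
        return False

    def count(self, msg):
        """Multiplicity matters: the store can hold duplicate messages."""
        return self.store[msg]

ts = TupleSpace()
ts.out("a"); ts.out("a"); ts.out("b")
print(ts.count("a"), ts.rd("b"), ts.in_("a"), ts.count("a"))
```

The `count` operation marks exactly the distinction the abstract draws: a variant that can observe multiplicities in the store is strictly more expressive than one that can only test presence.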

Relevance:

10.00%

Publisher:

Abstract:

Addressing life in borders and refugee camps requires understanding the way these spaces are ruled, the kinds of problems rule poses for the people who live there, and the abilities of inhabitants to remake their own lives. Recent literature on such spaces has been influenced by Agamben's notion of sovereignty, which reduces these spaces and their residents to abstractions. We propose an alternate framework focused on what we call aleatory sovereignty, or rule by chance. This allows us to see camps and borders not only as the outcomes of humanitarian projects but also of anxieties about governance and rule; to see their inhabitants not only as abject recipients of aid, but also as individuals who make decisions and choices in complex conditions; and to show that while the outcome of projects within such spaces is often unpredictable, the assumptions that undergird such projects create regular cycles of implementation and failure.

Relevance:

10.00%

Publisher:

Abstract:

A model of theoretical science is set forth to guide the formulation of general theories around abstract concepts and processes. Such theories permit explanatory application to many phenomena that are not ostensibly alike, and in so doing encompass socially disapproved violence, making special theories of violence unnecessary. Though none is completely adequate for the explanatory job, at least seven examples of general theories that help account for deviance make up the contemporary theoretical repertoire. From them, we can identify abstractions built around features of offenses, aspects of individuals, the nature of social relationships, and different social processes. Although further development of general theories may be hampered by potential indeterminacy of the subject matter and by the possibility of human agency, maneuvers to deal with such obstacles are available.

Relevance:

10.00%

Publisher:

Abstract:

Debuggers are crucial tools for developing object-oriented software systems, as they give developers direct access to the running system. Nevertheless, traditional debuggers rely on generic mechanisms to explore and exhibit the execution stack and system state, while developers reason about and formulate domain-specific questions using concepts and abstractions from their application domains. This creates an abstraction gap between the debugging needs and the debugging support, leading to an inefficient and error-prone debugging effort. To reduce this gap, we propose a framework for developing domain-specific debuggers called the Moldable Debugger. The Moldable Debugger is adapted to a domain by creating and combining domain-specific debugging operations with domain-specific debugging views, and adapts itself to a domain by selecting, at run time, appropriate debugging operations and views. We motivate the need for domain-specific debugging, identify a set of key requirements, and show how our approach improves debugging by adapting the debugger to several domains.
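The run-time selection mechanism described above can be sketched as operations paired with activation predicates. The API below is invented for illustration and is not the Moldable Debugger's actual interface (which is implemented in Pharo Smalltalk).

```python
# Hedged sketch of the moldable-debugger idea: each domain-specific
# debugging operation carries a predicate over the debugging context,
# and the debugger selects, at run time, the operations that apply.
# All names are hypothetical.

class DebuggingOperation:
    def __init__(self, name, applies_to, action):
        self.name = name
        self.applies_to = applies_to  # predicate over the debug context
        self.action = action          # what the operation does when invoked

class MoldableDebugger:
    def __init__(self):
        self.operations = []

    def register(self, op):
        self.operations.append(op)

    def available_operations(self, context):
        """Select, at run time, the operations matching the current domain."""
        return [op for op in self.operations if op.applies_to(context)]

dbg = MoldableDebugger()
dbg.register(DebuggingOperation(
    "step-to-next-parse-node",
    applies_to=lambda ctx: ctx.get("domain") == "parser",
    action=lambda ctx: "stepped to next AST node"))
dbg.register(DebuggingOperation(
    "step-over-bytecode",
    applies_to=lambda ctx: True,  # generic operation, always available
    action=lambda ctx: "stepped one bytecode"))

print([op.name for op in dbg.available_operations({"domain": "parser"})])
```

A parser developer thus sees a domain-level stepping operation alongside the generic one, while other domains see only the generic operation: the abstraction gap closes without replacing the underlying debugger.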

Relevance:

10.00%

Publisher:

Abstract:

Salinization is a soil threat that adversely affects ecosystem services and diminishes soil functions in many arid and semi-arid regions. Soil salinity management depends on a range of factors and can be complex, expensive, and time-demanding. Besides taking no action, possible management strategies include amelioration and adaptation measures. The WOCAT Technologies Questionnaire is a standardized methodology for monitoring, evaluating, and documenting sustainable land management practices through interaction with stakeholders. Here we use WOCAT for the systematic analysis and evaluation of soil salinization amelioration measures in the RECARE project case study in Greece: the Timpaki basin, a semi-arid region in south-central Crete where the main land use is horticulture in greenhouses irrigated by groundwater. Excessive groundwater abstractions have resulted in a drop of the groundwater level in the coastal part of the aquifer, leading to seawater intrusion and, in turn, to soil salinization due to irrigation with brackish water. Amelioration technologies already applied in the case study by the stakeholders are examined and classified according to the function they promote and/or improve. The documented technologies are evaluated for their impacts on ecosystem services, cost, and input requirements. Preliminary results show that stakeholders prefer technologies which maintain existing crop types while enhancing productivity and decreasing soil salinity, such as composting, mulching, rainwater harvesting, and seed biopriming. Further work will include result validation using qualitative approaches.

Relevance:

10.00%

Publisher:

Abstract:

Soil salinity management can be complex, expensive, and time-demanding, especially in arid and semi-arid regions. Besides taking no action, possible management strategies include amelioration and adaptation measures. Here we apply the World Overview of Conservation Approaches and Technologies (WOCAT) framework for the systematic analysis, evaluation, and selection of soil salinisation amelioration technologies in close collaboration with stakeholders. The participatory approach is applied in the RECARE (Preventing and Remediating degradation of soils in Europe through Land Care) project case study of Timpaki, a semi-arid region in south-central Crete (Greece) where the main land use is horticulture in greenhouses irrigated by groundwater. Excessive groundwater abstractions have resulted in a drop of the groundwater level in the coastal part of the aquifer, leading to seawater intrusion and, in turn, to soil salinisation. The documented technologies are evaluated for their impacts on ecosystem services, cost, and input requirements using a participatory approach and field evaluations. Results show that stakeholders prefer technologies which maintain existing crop types while enhancing productivity and decreasing soil salinity. The evaluation concludes that rainwater harvesting is the optimal solution for direct soil salinity mitigation, as it addresses a wider range of ecosystem and human well-being benefits. Nevertheless, this merit is offset by poor financial motivation, making agronomic measures more attractive to users.

Relevance:

10.00%

Publisher:

Abstract:

Houston, Texas maintains the climate and mosquito populations needed to support the circulation of dengue viruses. Given its proximity to dengue-endemic Mexico and the high volume of international travel routed through its airports, the city is susceptible to the introduction and subsequent local transmission of dengue virus. In 2008, a study at the University of Texas School of Public Health identified 58 suspected dengue fever cases that presented at hospitals and clinics in the Houston area; serum or CSF samples from these 58 cases tested positive or equivocal for anti-dengue IgM antibodies (Rodriguez, 2008). Here, we present the results of an investigation aimed at describing the clinical characteristics of the 58 suspected dengue fever cases and determining whether local transmission had occurred. Data from medical record abstractions and personal telephone interviews were used to describe the clinical characteristics and travel history of the suspected cases. Our analysis classified six probable dengue fever cases based on the World Health Organization case definition. Three of the probable cases for which we were able to obtain travel history had not recently traveled to an endemic area prior to the onset of symptoms, suggesting the illnesses were locally acquired in Houston. Further analysis led us to hypothesize that additional cases of dengue fever are present in our study population. Fifty-one percent of the study population was diagnosed with meningitis and/or encephalitis, and sixty percent of the individuals who received a lumbar puncture had abnormal CSF. Together these findings indicate viral infection with neurological involvement, which has been reported to occur with dengue fever. Among the individuals who received liver enzyme analysis, 54% had abnormal liver enzyme levels, a clinical sign commonly observed with dengue.
Our results indicate that a suspected outbreak of dengue fever with autochthonous transmission occurred in the Houston area between 2003 and 2005.

Relevance:

10.00%

Publisher:

Abstract:

Distributed real-time embedded systems are becoming increasingly important to society: more demands will be made on them, and greater reliance will be placed on the delivery of their services. A relevant subset are high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices, many of which can transfer data at high speed. The concept of distributed systems emerged to describe systems whose parts execute on several nodes that interact via a communication network. Java's popularity, facilities, and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques; however, RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302); its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed.
The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model with the HRTJ profile. It has been designed and implemented with the main requirements in mind, such as predictability and reliability in timing behavior and resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In this design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor functional and timing behavior, and provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message-size optimizations. Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization approach that introduces a new compiler to generate optimized code according to the computational model.
The proposed solution has the advantage of allowing communications to be scheduled and memory usage to be adjusted at compilation time. To validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block), and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
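The predictability argument for serialization can be illustrated with a fixed-layout encoding: when the message layout is known before run time, both the wire size and the buffer memory are computable ahead of execution, which is the property the thesis demands of serialization for hard real-time systems. The message layout below is invented for illustration; the thesis's own compiler-generated code is not shown here.

```python
# Illustrative sketch of "predictable serialization": a fixed message
# layout makes the serialized size a compile-time constant, so
# worst-case network and memory usage can be analysed statically.
import struct

# Hypothetical fixed layout: 4-byte message id, 8-byte timestamp,
# 8-byte payload value, all in network byte order.
MESSAGE_FORMAT = "!IQd"
MESSAGE_SIZE = struct.calcsize(MESSAGE_FORMAT)  # known before any message is sent

def serialize(msg_id, timestamp, value):
    """Always produces exactly MESSAGE_SIZE bytes: no dynamic-allocation
    surprises at run time."""
    return struct.pack(MESSAGE_FORMAT, msg_id, timestamp, value)

def deserialize(buffer):
    return struct.unpack(MESSAGE_FORMAT, buffer)

wire = serialize(7, 1_000_000, 3.5)
print(MESSAGE_SIZE, len(wire), deserialize(wire))
```

Contrast this with general-purpose serializers, whose output size depends on run-time object graphs and therefore resists the kind of worst-case analysis hard real-time certification requires.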

Relevance:

10.00%

Publisher:

Abstract:

The runtime management of the infrastructure providing service-based systems is a complex task, to the point where manual operation struggles to be cost-effective. As the functionality is provided by a set of dynamically composed distributed services, achieving a management objective requires applying multiple operations over the distributed elements of the managed infrastructure. Moreover, the manager must cope with the highly heterogeneous characteristics and management interfaces of the runtime resources. With this in mind, this paper proposes to support the configuration and deployment of services with an automated closed control loop. The automation is enabled by the definition of a generic information model, which captures all the information relevant to the management of the services with the same abstractions, describing the runtime elements, service dependencies, and business objectives. On top of that, a technique based on satisfiability is described which automatically diagnoses the state of the managed environment and obtains the changes required to correct it (e.g., installation, service binding, update, or configuration). The results from a set of case studies extracted from the banking domain are provided to validate the feasibility of this proposal.
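The satisfiability-based diagnosis step can be sketched in miniature: dependencies and business objectives become Boolean constraints, and a satisfying assignment yields the corrective actions. The brute-force search, the service names, and the constraints below are all invented for illustration; the paper's actual technique and model are not reproduced here.

```python
# Toy sketch of satisfiability-based diagnosis: encode service
# dependencies and the business objective as Boolean constraints, then
# search (brute force, for illustration) for a satisfying configuration.
# The difference from the current state gives the required changes,
# e.g. which services to install or bind.
from itertools import product

VARS = ["web", "auth", "db"]

def constraints(state):
    web, auth, db = (state[v] for v in VARS)
    return [
        not web or auth,   # web service depends on the auth service
        not auth or db,    # auth service depends on the database
        web,               # business objective: web service must run
    ]

def diagnose(current):
    """Find a satisfying configuration and the changes needed to reach it."""
    for values in product([False, True], repeat=len(VARS)):
        state = dict(zip(VARS, values))
        if all(constraints(state)):
            changes = {v: state[v] for v in VARS if state[v] != current[v]}
            return state, changes
    return None, None

state, changes = diagnose({"web": False, "auth": False, "db": True})
print(state, changes)
```

A real deployment would hand the constraint set to a SAT solver rather than enumerating assignments, but the closed-loop shape is the same: observe the current state, solve, and apply the diff as management operations.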