961 results for Abstraction.


Relevance:

10.00%

Publisher:

Abstract:

Implementation of human rights is often criticized because it is perceived as being imposed on the rest of the world. In this case, human rights come to be seen as a mere abstraction, an empty word. What are the theoretical arguments of these critics, and can we determine any historical grounds for them? In this paper I point to similar critiques raised after the French Revolution, such as those of the Historical School and of Hegel, and examine whether some of them are still relevant, comparing them with the contemporary arguments of cultural relativists. There are different streams and categorizations of human rights theories in today's world; what differentiates them is essentially the source of human rights. After the French Revolution, the Historical School criticized individuation, while Hegel criticized the formal freedom that was, in his view, a consequence of the Revolution; in this context Hegel drew a distinction between real freedom and formal freedom. Besides the theories of sources, theories of implementation, such as human rights as a model of learning or as the result of a historical process, also deserve attention. The crucial point is to integrate human rights as an inner process rather than to use them as a tool for intervention in other countries, as we observe in today's world. This is exactly why I find the discussion of sources more important: it can help to show how the inner evolution of a society makes the realization of human rights possible, and how the abstraction and misuse mentioned above can be avoided.

Relevance:

10.00%

Publisher:

Abstract:

When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in the strategic and tactical decision making process. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run, and the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision making tool but a decision support tool, allowing better informed decisions to be made.

Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification: only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system, or to make predictions about a target system's performance. It can be viewed as an artificial white room which allows one not only to gain insight but also to test new theories and practices without disrupting the daily routine of the focal organisation. What you can expect to gain from a simulation study is well summarised by FIRMA (2000): if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, then one can answer some of the following questions:
· Which kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions?
· Which kind of behaviour will a given target system display in the future?
· Which state will the target system reach in the future?

The required accuracy of the simulation model very much depends on the type of question one is trying to answer. To respond to the first question, the simulation model needs to be an explanatory model, which requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. Such predictions involve showing trends, rather than giving precise and absolute predictions of target system performance. The numerical results of a simulation experiment are, on their own, most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or to compile best practice guidelines. One needs a good working knowledge of the behaviour of the real system to fully exploit the understanding gained from simulation experiments.

The goal of this chapter is to introduce the newcomer to a topic that we think is a valuable addition to the toolset of analysts and decision makers. We give a summary of information we have gathered from the literature and of the first-hand experience we have gained during the last five years, whilst obtaining a better understanding of this exciting technology. We hope that this will help you to avoid some of the pitfalls that we have unwittingly encountered.
Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science, with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements, to prepare you for Section 4, where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system. Section 6 provides a collection of resources for further study, and finally, in Section 7, we conclude the chapter with a short summary.
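
A minimal Python sketch of the core idea above, that a simulation model is a set of rules mapping the current state to the next state, run and observed rather than solved. This is an illustration, not from the chapter; the single-server queue and all probabilities and names are assumptions:

    import random

    def step(queue_length, arrival_prob=0.5, service_prob=0.4):
        """Apply one time step of the model's rules to the current state."""
        if random.random() < arrival_prob:
            queue_length += 1          # rule 1: a customer may arrive
        if queue_length > 0 and random.random() < service_prob:
            queue_length -= 1          # rule 2: a customer may be served
        return queue_length

    state = 0                          # initial condition: an empty queue
    history = []
    for t in range(100):               # "run" the model...
        state = step(state)
        history.append(state)          # ...observing the state at every step

    # Raw trajectories like this would still need statistical analysis,
    # as the abstract notes; here we just report two summary numbers.
    print(max(history), sum(history) / len(history))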

Relevance:

10.00%

Publisher:

Abstract:

This work proposes to adapt the Notification Oriented Paradigm (NOP) so that it provides support for fuzzy concepts. NOP is inspired by elements of the imperative and declarative paradigms, and seeks to solve some of the drawbacks of both. By decomposing an application into a network of smaller computational entities that are executed only when necessary, NOP eliminates the need to perform unnecessary computations and helps to achieve better logical-causal uncoupling, facilitating code reuse and application distribution over multiple processors or machines. In addition, NOP allows logical-causal knowledge to be expressed at a high level of abstraction, through rules in IF-THEN format. Fuzzy systems, in turn, perform logical inferences on causal knowledge bases (IF-THEN rules) that can deal with problems involving uncertainty. Since NOP uses IF-THEN rules in an alternative way, reducing redundant evaluations and providing better decoupling, this research was carried out to identify, propose and evaluate the changes needed for NOP to be used in the development of fuzzy systems. Two fully usable materializations were then created: a C++ framework, and a complete programming language (LingPONFuzzy), both providing support for fuzzy inference systems. From there, case studies were created and several tests were conducted in order to validate the proposed solution. The test results show a significant reduction in the number of rules evaluated in comparison to a fuzzy system developed using conventional tools (frameworks), which may represent a performance improvement for such applications.
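
For orientation, a minimal Python sketch of generic Mamdani-style fuzzy IF-THEN inference, the kind of causal knowledge base discussed above. This is not the NOP or LingPONFuzzy implementation; the membership functions, rules and consequent values are illustrative assumptions:

    def triangular(x, a, b, c):
        """Triangular membership: rises from a, peaks at b, falls to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fan_speed(temperature):
        # Antecedents: IF temperature IS low / IF temperature IS high;
        # each rule fires with the degree of truth of its antecedent.
        firing = {
            "slow": triangular(temperature, 0.0, 15.0, 30.0),
            "fast": triangular(temperature, 20.0, 35.0, 50.0),
        }
        # Assumed singleton consequents (rpm), weighted-average defuzzification.
        rpm = {"slow": 300.0, "fast": 1200.0}
        total = sum(firing.values()) or 1.0
        return sum(firing[k] * rpm[k] for k in firing) / total

    print(fan_speed(28.0))  # a warm reading fires both rules partially

In a conventional engine every rule is re-evaluated on each inference cycle; NOP's contribution, per the abstract, is to notify and re-evaluate only the rules whose inputs actually changed.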

Relevance:

10.00%

Publisher:

Abstract:

The new generation of the Web, the Semantic Web, opens up opportunities to endow Web content with meaning. Ontologies are one of the main tools for explicitly specifying the concepts of a given domain, their properties and their relationships, so that information is published in formats that are automatically intelligible to machine agents, which can then locate and manage that information precisely. This work presents a framework for a network of ontologies to represent concepts, attributes, operations and constraints relating to the curricular items used in the national categorization processes for Ecuadorian university teachers. The first part presents the context of the domain and related work; the process followed and the abstraction of the ontological model are then described, and finally an ontology is presented. It is a domain ontology, since it provides the meaning of the concepts and their relationships within the domain of curricular items produced by university teachers, which are requirements of the university teacher categorization processes in Ecuador.
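
As a hedged illustration of what declaring such domain concepts can look like in practice, here is a small sketch assuming the Python rdflib library; the namespace, class and property names are hypothetical and are not taken from the paper's ontology:

    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/curricular#")  # hypothetical namespace
    g = Graph()
    g.bind("ex", EX)

    # Concepts (classes) and a relationship between them.
    g.add((EX.CurricularItem, RDF.type, RDFS.Class))
    g.add((EX.Teacher, RDF.type, RDFS.Class))
    g.add((EX.producedBy, RDFS.domain, EX.CurricularItem))
    g.add((EX.producedBy, RDFS.range, EX.Teacher))

    # An instance with a human-readable attribute.
    g.add((EX.article42, RDF.type, EX.CurricularItem))
    g.add((EX.article42, RDFS.label, Literal("Peer-reviewed article")))

    print(g.serialize(format="turtle"))  # publish in a machine-readable format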

Relevance:

10.00%

Publisher:

Abstract:

Home automation holds the potential of realizing cost savings for end users while reducing the carbon footprint of domestic energy consumption. Yet adoption is still very low, and the high cost of vendor-supplied home automation systems is a major prohibiting factor. Open-source systems such as FHEM, Domoticz and OpenHAB are a cheaper alternative and can drive the adoption of home automation. Moreover, they have the advantage of not being limited to a single vendor or communication technology, which gives end users flexibility in their choice of devices. However, interacting with devices that use diverse communication technologies can be inconvenient for users, limiting the utility they derive from the system. For application developers, creating applications that interact with the several technologies in a home automation system is not a consistent process. Hence there is a need for a common description mechanism that makes interaction smooth for end users and enables application developers to build home automation applications in a consistent and uniform way. This thesis proposes such a description mechanism within the context of an open-source home automation system, FHEM, together with a system concept for its application. A mobile application was developed as a proof of concept of the proposed description mechanism, and the results of the implementation are reflected upon.
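
A minimal sketch of what such a common description mechanism could look like, assuming a uniform command vocabulary mapped to technology-specific messages. This is an illustration only, not the thesis's actual mechanism; all field names, devices and message formats are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class DeviceDescription:
        device_id: str
        kind: str                      # e.g. "switch", "dimmer", "sensor"
        technology: str                # e.g. "Z-Wave", "Zigbee", "HTTP"
        commands: dict = field(default_factory=dict)  # uniform name -> native command

    # Two devices with different technologies expose the same command names.
    lamp = DeviceDescription("lamp1", "switch", "Z-Wave",
                             {"on": "BASIC SET 255", "off": "BASIC SET 0"})
    bulb = DeviceDescription("bulb1", "switch", "HTTP",
                             {"on": "PUT /state {\"on\":true}",
                              "off": "PUT /state {\"on\":false}"})

    def send(device, command):
        """Applications use the uniform name; the description supplies
        the device's native, technology-specific message."""
        print(f"[{device.technology}] {device.device_id}: {device.commands[command]}")

    send(lamp, "on")
    send(bulb, "on")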

Relevance:

10.00%

Publisher:

Abstract:

Variable Data Printing (VDP) has brought new flexibility and dynamism to the printed page. Each printed instance of a specific class of document can now have different degrees of customized content within the document template. This flexibility comes at a cost: if every printed page is potentially different from all others, it must be rasterised separately, which is a time-consuming process. Technologies such as PPML (Personalized Print Markup Language) attempt to address this problem by dividing the bitmapped page into components that can be cached at the raster level, thereby speeding up the generation of page instances. A large number of documents are stored in Page Description Languages at a higher level of abstraction than the bitmapped page. Much of this content could be reused within a VDP environment, provided that separable document components can be identified and extracted. These components then need to be individually rasterisable so that each high-level component can be related to its low-level (bitmap) equivalent. Unfortunately, the unstructured nature of most Page Description Languages makes it difficult to extract content easily. This paper outlines the problems encountered in extracting component-based content from existing page description formats, such as PostScript, PDF and SVG, and how the differences between the formats affect the ease with which content can be extracted. The techniques are illustrated with reference to a tool called COG Extractor, which extracts content from PDF and SVG and prepares it for reuse.
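
A minimal sketch of the raster-level caching idea that PPML-style reuse relies on. This is an illustration, not the COG Extractor; the rasterise() stand-in and all names are assumptions:

    import hashlib

    _raster_cache = {}

    def rasterise(component_source: str) -> bytes:
        # Stand-in for a real RIP call; here we just encode the source.
        return component_source.encode("utf-8")

    def get_raster(component_source: str) -> bytes:
        """Return the cached bitmap for a component, rasterising on a miss."""
        key = hashlib.sha256(component_source.encode("utf-8")).hexdigest()
        if key not in _raster_cache:
            _raster_cache[key] = rasterise(component_source)  # pay the cost once
        return _raster_cache[key]

    # A static logo is rasterised once and reused across all page instances;
    # only the variable text differs per instance.
    pages = [get_raster("LOGO") + get_raster(f"Dear customer {n}") for n in range(3)]
    print(len(_raster_cache))  # 4 distinct components were cached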

Relevance:

10.00%

Publisher:

Abstract:

Traditional organic chemistry has long been dominated by ground state thermal reactions. The alternative to this is excited state chemistry, which uses light to drive chemical transformations. There is considerable interest in using this clean renewable energy source due to concerns surrounding the combustion byproducts associated with the consumption of fossil fuels. The work presented in this text will focus on the use of light (both ultraviolet and visible) for the following quantitative chemical transformations: (1) the release of compounds containing carboxylic acid and alcohol functional groups and (2) the conversion of carbon dioxide into other usable chemicals. Chapters 1-3 will introduce and explore the use of photoremovable protecting groups (PPGs) for the spatiotemporal control of molecular concentrations. Two new PPGs are discussed, the 2,2,2-tribromoethoxy group for the protection of carboxylic acids and the 9-phenyl-9-tritylone group for the protection of alcohols. Fundamental interest in the factors that affect C–X bond breaking has driven the work presented in this text for the release of carboxylic acid substrates. Product analysis from the UV photolysis of 2,2,2-tribromoethyl-(2′-phenylacetate) in various solvents results in the formation of H–atom abstraction products as well as the release of phenylacetic acid. The deprotection of alcohols is realized through the use of UV or visible light photolysis of 9-phenyl-9-tritylone ethers. Central to this study is the use of photoinduced electron transfer chemistry for the generation of ion diradicals capable of undergoing bond-breaking chemistry leading to the release of the alcohol substrates. Chapters 4 and 5 will explore the use of N-heterocyclic carbenes (NHCs) as a catalyst for the photochemical reduction of carbon dioxide. Previous experiments have demonstrated that NHCs can add to CO2 to form stable zwitterionic species known as N-heterocyclic-2-carboxylates (NHC–CO2). Work presented in this text illustrates that the stability of these species is highly dependent on solvent polarity, consistent with a lengthening of the imidazolium to carbon dioxide bond (C_NHC–C_CO2). Furthermore, these adducts interact with excited state electron donors, resulting in the generation of ion diradicals capable of converting carbon dioxide into formic acid.

Relevance:

10.00%

Publisher:

Abstract:

Dissertation (Master's)—Universidade de Brasília, Faculdade de Comunicação, Programa de Pós-Graduação em Comunicação, 2016.

Relevance:

10.00%

Publisher:

Abstract:

Dissertation (Master's)—Universidade de Brasília, Faculdade de Comunicação, Programa de Pós-Graduação em Comunicação, 2016.

Relevance:

10.00%

Publisher:

Abstract:

Dissertation (Master's)—Universidade de Brasília, Faculdade de Comunicação, Programa de Pós-Graduação em Comunicação, 2016.

Relevance:

10.00%

Publisher:

Abstract:

Prepared under joint supervision (cotutelle) with the École normale supérieure de Cachan – Université Paris-Saclay.

Relevance:

10.00%

Publisher:

Abstract:

The development of robots has shown itself to be a very complex interdisciplinary research field. The predominant procedure for these developments in recent decades has been based on the assumption that each robot is a fully personalized project, with hardware and software technologies embedded directly in robot parts with no level of abstraction. Although this methodology has brought countless benefits to robotics research, it has also imposed major drawbacks: (i) the difficulty of reusing hardware and software parts in new robots or new versions; (ii) the difficulty of comparing the performance of different robot parts; and (iii) the difficulty of adapting development needs, at both the hardware and software levels, to the expertise of local groups. Large advances might be achieved, for example, if the physical parts of a robot could be reused in a different robot constructed with other technologies by another researcher or group. This paper proposes a framework for robots, TORP (The Open Robot Project), which aims to put forward a standardization of all dimensions (electrical, mechanical and computational) of a shared robot development model. This architecture is based on the dissociation between the robot and its parts, and between the robot parts and their technologies. In this paper, the first specification for a TORP family and the first humanoid robot constructed following the TORP specification set are presented, as well as the advances proposed for their improvement.
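
A minimal Python sketch of the dissociation principle, interface-based decoupling of a robot from its parts and of parts from their technologies. The part and class names are hypothetical illustrations, not the actual TORP specification:

    from abc import ABC, abstractmethod

    class Actuator(ABC):
        """Technology-neutral contract for any actuator part."""
        @abstractmethod
        def set_angle(self, degrees: float) -> None: ...

    class HobbyServo(Actuator):
        def set_angle(self, degrees: float) -> None:
            print(f"PWM servo -> {degrees} deg")    # one technology

    class StepperJoint(Actuator):
        def set_angle(self, degrees: float) -> None:
            steps = int(degrees / 1.8)              # a different technology
            print(f"stepper -> {steps} steps")

    class Robot:
        def __init__(self, elbow: Actuator):
            self.elbow = elbow                      # the robot sees only the interface

        def wave(self):
            for angle in (30.0, 120.0, 30.0):
                self.elbow.set_angle(angle)

    Robot(HobbyServo()).wave()                      # parts are interchangeable
    Robot(StepperJoint()).wave()

Swapping StepperJoint for HobbyServo changes nothing in Robot, which is the kind of cross-group part reuse the paper argues for.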

Relevance:

10.00%

Publisher:

Abstract:

Employees are the human capital which, to a great extent, contributes to the success and development of high-performance and sustainable organizations. In a work environment, there is a need for a tool for tracking and following up on each employee's professional progress while staying aligned with the organization's strategic and operational goals and objectives. The research work within this Thesis aims to contribute to improving employees' self-awareness and self-regulation; two predominant research areas are studied and analyzed: Visual Analytics and Gamification. Visual Analytics enables the specification of personalized dashboard interfaces with alerts and indicators that keep employees aware of their skills and let them continuously monitor how to improve their expertise, simultaneously promoting behavioral change and the adoption of good practices. The study of Gamification techniques together with Talent Management features enabled the design of new processes to engage, motivate and retain highly productive employees, and to foster a competitive working environment in which employees are encouraged to take part in new and rewarding activities, and in which knowledge and experience are recognized as a relevant asset. Design Science Research was selected as the research methodology; the creation of new knowledge is therefore based on an iterative cycle addressing concepts such as design, analysis, reflection and abstraction. Through collaboration in an international project (Active@Work), funded by the Active and Assisted Living Programme, the results followed a design thinking approach to the specification of the structure and behavior of the Skills Development Module, namely the identification of requirements and the design of an innovative info-structure of metadata to support the user experience. A set of mockups was designed based on user roles and main concerns. This approach enabled the conceptualization of a solution to proactively assist the management and assessment of skills in a personalized and dynamic way. The outcomes of this Thesis aim to demonstrate the articulation between emerging research areas such as Visual Analytics and Gamification, and are expected to represent conceptual gains in these two research fields.

Relevance:

10.00%

Publisher:

Abstract:

Dissertation (Master's)—Universidade de Brasília, Instituto de Física, Programa de Pós-Graduação em Física, 2015.

Relevance:

10.00%

Publisher:

Abstract:

Laser-induced room temperature luminescence of air-equilibrated benzophenone/O-propylated p-tert-butylcalix[4]arene solid powdered samples revealed the existence of a novel emission, in contrast with benzophenone/p-tert-butylcalix[4]arene complexes, where only benzophenone emits. This novel emission was identified as phosphorescence of 1-phenyl-1,2-propanedione, which is formed as the result of a hydrogen atom abstraction reaction of the triplet excited benzophenone from the propoxy substituents of the calixarene. Room temperature phosphorescence was obtained in air-equilibrated samples in all propylated hosts. The decay times of the benzophenone emission vary greatly with the degree of propylation, the shortest lifetimes being obtained in the tri- and tetrapropylated calixarenes. Triplet-triplet absorption of benzophenone was detected in all cases, and is the predominant absorption in the p-tert-butylcalix[4]arene case, where an endo-calix complex is formed. Benzophenone ketyl radical formation occurs with the O-propylated p-tert-butylcalix[4]arene hosts, suggesting a different type of host/guest molecular arrangement. Diffuse reflectance laser flash photolysis and gas chromatography-mass spectrometry techniques provided complementary information, the former about transient species and the latter regarding the final products formed after light absorption. Product analysis and identification clearly show that the two main degradation photoproducts following laser excitation in the propylated substrates are 1-phenyl-1,2-propanedione and 2-hydroxybenzophenone, although several other minor photodegradation products were identified. A detailed mechanistic analysis is proposed. While the solution photochemistry of benzophenone is dominated by the hydrogen abstraction reaction from suitable hydrogen donors, in these solid powdered samples the alpha-cleavage reaction also plays an important role. This occurs even with a single laser pulse lasting only a few nanoseconds, and is apparently related to the fact that scattered radiation exists, due to multiple internal reflections possibly trapping light within non-absorbing microcrystals in the sample, and is detected until at least 20 μs after the laser pulse. This could explain how the photoproducts thus formed can also be excited with only one laser pulse.