45 results for Optics in computing


Relevance: 40.00%

Abstract:

Distributed parallel execution systems speed up applications by splitting tasks into processes whose execution is assigned to different receiving nodes in a high-bandwidth network. On the distributing side, a fundamental problem is grouping and scheduling such tasks so that each one involves sufficient computational cost when compared with the task creation and communication costs and other such practical overheads. On the receiving side, an important issue is to have some assurance of the correctness and characteristics of the code received and also of the kind of load the particular task is going to pose, which can be specified by means of certificates. In this paper we present, in a tutorial way, a number of general solutions to these problems, and illustrate them through their implementation in the Ciao multi-paradigm language and program development environment. This system includes facilities for parallel and distributed execution, an assertion language for specifying complex program properties (including safety and resource-related properties), and compile-time and run-time tools for performing automated parallelization and resource control, as well as certification of programs with resource consumption assurances and efficient checking of such certificates.
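The distributing-side criterion can be pictured as a simple threshold test. A minimal sketch, assuming an invented cost model (estimated task cost versus spawn and communication overheads); this illustrates the granularity idea only, not Ciao's actual analysis:

```python
# Minimal sketch of a granularity-control test (illustrative only;
# the names and the cost model are assumptions, not Ciao's machinery).

def worth_distributing(task_cost, spawn_overhead, bytes_to_send, bandwidth):
    """Spawn a remote task only if its estimated computational cost
    exceeds the task-creation plus communication overheads."""
    comm_cost = bytes_to_send / bandwidth
    return task_cost > spawn_overhead + comm_cost

def schedule(tasks, spawn_overhead=0.002, bandwidth=1e9):
    """Partition tasks into those executed remotely and those kept local."""
    remote, local = [], []
    for cost, size in tasks:
        (remote if worth_distributing(cost, spawn_overhead, size, bandwidth)
         else local).append((cost, size))
    return remote, local

# Example: (estimated_cost_seconds, input_size_bytes) pairs.
remote, local = schedule([(0.5, 10_000), (0.0001, 1_000_000)])
print(len(remote), "remote,", len(local), "local")   # -> 1 remote, 1 local
```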

Relevance: 40.00%

Abstract:

Ubiquitous sensor network deployments, such as those found in smart cities and ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the use of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, it is the supercomputing facilities that present the higher economic and environmental impact, due to their very high power consumption; this problem has so far been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. Although not performance-optimal, these allocation policies reduce both the energy consumed by the whole infrastructure and the total execution time.
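The redistribution idea can be sketched as a greedy assignment. Everything below (power figures, capacity model, task data) is invented for illustration; it is not the paper's actual technique:

```python
# Illustrative sketch of heterogeneity- and application-aware workload
# redistribution: low-demand tasks move from the data center to idle
# WSN nodes when they fit. All figures are invented assumptions.

def assign(tasks, wsn_capacity):
    """Greedy placement: each task goes to the first idle WSN node with
    enough spare capacity, otherwise it stays in the data center."""
    placement = {}
    for name, demand in sorted(tasks.items(), key=lambda t: t[1]):
        node = next((n for n, cap in wsn_capacity.items() if cap >= demand), None)
        if node is not None:
            wsn_capacity[node] -= demand
            placement[name] = node
        else:
            placement[name] = "datacenter"
    return placement

def energy_estimate(placement, dc_watts=300.0, node_watts=2.0):
    """Toy per-task power proxy: data-center slots cost far more."""
    return sum(dc_watts if n == "datacenter" else node_watts
               for n in placement.values())

tasks = {"sensor-fusion": 0.2, "video-analytics": 8.0, "alert-filter": 0.1}
placement = assign(tasks, {"node-a": 1.0, "node-b": 0.5})
print(placement, "->", energy_estimate(placement), "W")
```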

Relevance: 40.00%

Abstract:

Low resources in many African locations prevent many African scientists and physicians from accessing the latest advances in technology. This deficiency hinders the daily life of African professionals, who often cannot afford, for instance, the cost of Internet fees or software licenses. The AFRICA BUILD project, funded by the European Commission and formed by four European and four African institutions, intends to provide advanced computational tools to African institutions in order to overcome current technological limitations. In the context of AFRICA BUILD we have carried out a series of experiments to test the feasibility of using cloud computing technologies in two different locations in Africa: Egypt and Burundi. The project aims to create a virtual platform that provides access to a wide range of biomedical informatics and learning resources to professionals and researchers in Africa.

Relevance: 40.00%

Abstract:

Access to information and continuous education represent critical factors for physicians and researchers around the world. For African professionals, the situation is even more problematic due to frequently difficult access to technological infrastructures and basic information. Both education and information technologies (including hardware, software and networking) are expensive and unaffordable for many African professionals. Thus, the use of e-learning and an open approach to information exchange and software use have already been proposed to address medical informatics issues in Africa. In this context, the AFRICA BUILD project, supported by the European Commission, aims to develop a virtual platform providing access to a wide range of biomedical informatics and learning resources to professionals and researchers in Africa. A consortium of four African and four European partners works together in this initiative. In this framework, we have developed a prototype of a cloud-computing infrastructure to demonstrate, as a proof of concept, the feasibility of this approach. We have conducted the experiment in two different locations in Africa: Burundi and Egypt. As shown in this paper, technologies such as cloud computing and the use of open-source medical software for a wide range of cases present significant challenges and opportunities for developing countries, such as many in Africa.

Relevance: 40.00%

Abstract:

Optics and LEDs: design methods, design examples, conclusions.

Relevance: 40.00%

Abstract:

Freeform surfaces are key to state-of-the-art nonimaging optics for solving the challenges of concentrating photovoltaics. Different families of designs (FK, XR, FRXI) will be presented, based on the SMS 3D design method and Köhler homogenization.

Relevance: 40.00%

Abstract:

A 5-day training in nonimaging optics for European SMEs' employees was carried out in June 2012 in the framework of the FP7-funded Support Action "SMETHODS". The training combined theoretical introduction and hands-on practice. The experience was very positive, and the lessons learned will improve the next scheduled sessions. Introduction: The FP7-funded Support Action "SMETHODS" [1], which started on 1 September 2011, is an initiative of seven European academic institutions to strengthen Europe's optics and photonics industry. Participation in training sessions is free, and priority in the selection of participants is given to employees of small and medium-sized European enterprises (SMEs). The consortium in SMETHODS is formed by seven partners that are the most prominent academic institutions in optical design in their countries. Through fully integrated collaborative training sessions, the consortium provides professional assistance as well as hands-on training in a variety of design tasks in four domains: (1) imaging optics, (2) nonimaging optics, (3) wave optics, and (4) diffractive optics. For each of these domains, 5-day training sessions are scheduled to be held in different locations throughout Europe, taught four times over a 2.5-year period.

Relevance: 40.00%

Abstract:

A new methodology for studying irregular behaviours in logic cells is reported. It is based on two types of diagrams, namely phase and working diagrams. Sets of four bits are grouped and represented by their hexadecimal equivalent, and some hexadecimal numbers correspond to certain logic functions. The influence of internal and external tolerances, namely those appearing in the devices employed and in the working signals, may be analysed with this method. Its importance in the case of logic structures with chaotic behaviour is studied.
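The four-bit grouping step can be illustrated directly; a minimal sketch (the function name and the sample bit string are ours, not the authors' tooling):

```python
# Grouping a logic cell's output bits in sets of four and labeling each
# group with its hexadecimal equivalent (simple illustration of the
# encoding described in the abstract).

def to_hex_groups(bits):
    """'101001101111' -> ['A', '6', 'F']; pads the tail with zeros."""
    bits = bits + "0" * (-len(bits) % 4)          # pad to a multiple of 4
    return [format(int(bits[i:i + 4], 2), "X") for i in range(0, len(bits), 4)]

# A sampled output sequence from a logic cell:
print(to_hex_groups("101001101111"))   # -> ['A', '6', 'F']
```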

Relevance: 40.00%

Abstract:

Molecular computing is a discipline concerned with the design and implementation of information-processing devices built on a biological substrate, such as deoxyribonucleic acid (DNA), ribonucleic acid (RNA) or proteins. After Watson and Crick discovered the double-helix molecular structure of DNA in the 1950s, further discoveries followed, such as restriction enzymes and the polymerase chain reaction (PCR), contributing decisively to the emergence of recombinant DNA technology. Thanks to this technology, and to the steep drop in the cost of DNA sequencing and synthesis, biomolecular computing was able to move beyond a purely theoretical conception. The work presented by Adleman (1994) solved an NP-complete problem (the directed Hamiltonian path problem) using only DNA molecules. The massive parallel processing capacity offered by recombinant DNA techniques allowed Adleman to solve the problem in polynomial time, albeit at the cost of an exponential consumption of DNA molecules. Brute-force algorithms similar to Adleman's were used to solve other NP-complete problems, such as the satisfiability of logical formulas (SAT; Lipton, 1995). It soon became clear that biomolecular computing could not compete with silicon computers in speed or precision, so its focus and objectives shifted to problems with biomedical applications (Simmel, 2007), leaving aside classical computing problems. Since then, several models of biomolecular devices have been proposed that, autonomously (without a bio-engineer performing laboratory operations), can take a biological substrate as input and deliver an output that is also in biological form: processors that exploit polymerase extension (Hagiya et al., 1997), automata driven by restriction enzymes (Benenson et al., 2001) or by deoxyribozymes (Stojanovic et al., 2002), and competitive hybridization circuits (Yurke et al., 2000).

This thesis presents a set of nucleic acid device models able to implement various logic computing operations using biomolecular computing techniques (competitive DNA hybridization and enzymatic reactions), with applications in genetic diagnosis. The first set of models, presented in Chapter 5 and published in Sainz de Murieta and Rodríguez-Patón (2012b), Rodríguez-Patón et al. (2010a) and Sainz de Murieta and Rodríguez-Patón (2010), defines a type of biosensor that uses single DNA strands to encode simple rules such as "IF DNA-strand-1 AND DNA-strand-2 are present, THEN disease-B". These rules interact with input signals (DNA or RNA of any kind) to produce an output signal (also in the form of a nucleic acid). The output signal represents a diagnosis, which can be measured by means of fluorescent particles (FRET techniques) or can even be a treatment administered in response to a set of symptoms. The model presented in Chapter 5, published in Rodríguez-Patón et al. (2011), can execute resolution chains over logical formulas in conjunctive normal form. Each clause of a formula is encoded in a DNA molecule. Each proposition p is encoded by assigning it a single DNA strand, with the complementary strand corresponding to the proposition ¬p, and clauses are encoded by placing different propositions on the same DNA strand. The model can run Horn clause logic programs by applying multiple cascaded iterations of resolution, thereby implementing the function of a programmable autonomous nanodevice; the technique can also be used to solve SAT without external assistance. The model presented in Chapter 6 has been published in Sainz de Murieta and Rodríguez-Patón (2012c), and the model presented in Chapter 7 in Sainz de Murieta and Rodríguez-Patón (2013c). Although they exploit different biomolecular computing methods (competitive DNA hybridization in Chapter 6 versus enzymatic reactions in Chapter 7), both models are able to perform Bayesian inference. They operate by taking single DNA strands as input, each representing the presence or absence of a specific molecular indicator (an evidence). The prior probability of a disease, together with the conditional probability of a signal (or symptom) given the disease, constitutes the knowledge base, and is encoded by combining different DNA molecules and their relative concentrations. When the input molecules interact with those of the knowledge base, two classes of DNA strands are released, whose relative proportion represents the application of Bayes' theorem: the conditional probability of the disease given the signal (or symptom). All these devices can be seen as building blocks that, combined modularly, allow the implementation of in vitro systems built from DNA sensors, capable of sensing and processing biological signals. Automata of this kind currently hold great potential, as well as considerable scientific impact. A perfect example was the publication of Xie et al. (2011) in Science, presenting a biomolecular diagnostic automaton capable of selectively triggering apoptosis in cancer cells without affecting healthy cells.
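The Bayesian step that both models implement can be followed numerically: concentrations encode the prior and the conditionals, and the ratio of the two released strand classes is the posterior. A minimal numeric sketch with invented probabilities (the DNA encoding itself is not modeled here):

```python
# Numeric sketch of the Bayesian inference the DNA devices perform:
# relative concentrations encode the prior and the conditionals, and
# the proportion of the two released strand classes is the posterior.
# All probability values here are invented for illustration.

prior_disease = 0.01            # P(D), encoded as a relative concentration
p_signal_given_disease = 0.9    # P(s | D)
p_signal_given_healthy = 0.05   # P(s | not D)

# Strands released on observing signal s, proportional to the joint terms:
released_disease = prior_disease * p_signal_given_disease          # ~ P(s, D)
released_healthy = (1 - prior_disease) * p_signal_given_healthy    # ~ P(s, not D)

# The relative proportion of the two strand classes is Bayes' theorem:
posterior = released_disease / (released_disease + released_healthy)
print(f"P(D | s) = {posterior:.3f}")   # 0.009 / (0.009 + 0.0495) ~ 0.154
```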

Relevance: 40.00%

Abstract:

This paper presents the rationale for building a Telematics Engineering curriculum. Telematics is a strongly computing-oriented area; hence, the authors initially intended to apply the common requirements described in the computing curricula elaborated by the ACM/IEEE-CS Joint Curriculum Task Force. This experience has revealed some problematic aspects of the ACM/IEEE-CS proposal. From the analysis of these problems, a model to guide the selection, and especially the approach, of the Telematics curriculum contents is proposed. This model can easily be generalized to other strongly computing-oriented curricula, whose number is growing every day.

Relevance: 40.00%

Abstract:

Computing is becoming the fifth utility (along with gas, water, electricity and telephony), partly due to the impact of cloud computing on most organizations. This form of computing is used by an ever-increasing variety of systems, including critical systems, which has an impact on the internal complexity and the dependability of the organization's systems and of those offered to its clients. This work investigates the use of cloud computing by critical systems, focusing on dependencies and especially on the dependability of these systems. Some examples of such use are presented and, although the use of cloud computing in critical systems is not yet widespread, its potential impact is outlined. The objective of this work is, first, to define a model that can quantitatively represent the dependability interdependencies for organizations using these systems, and then to apply this model to a critical system in the healthcare domain and present the results. The concepts of "macro-dependability" and "micro-dependability" are introduced in the model to define interdependence and to analyse the dependability of systems that depend on other systems.

ABSTRACT: With the increasing utilization of Internet services and cloud computing by most organizations (both private and public), it is clear that computing is becoming the fifth utility (along with water, electricity, telephony and gas). These technologies are used for almost all types of systems, and their number is increasing, including critical infrastructure (CI) systems. Even if CI systems appear not to rely directly on cloud services, there may be hidden inter-dependencies. This is true even for private cloud computing, which seems more secure and reliable. Critical systems may begin with a clear and simple design, but evolve, as described by Egan, into "rafted" networks. Because they are usually controlled by one or a few organizations, their dependencies can be understood even when they are complex systems, and the organization oversees and manages changes. These CI systems have been affected by the introduction of new ICT models such as global communications, PCs and the Internet. Virtualization took longer to be adopted by critical systems, due to their strategic nature, but once these technologies had been proven in other areas they were eventually adopted as well, for reasons such as cost. A new technology model, based on several earlier technologies (virtualization, distributed and utility computing, web and software services) offered in new ways, is now emerging under the name of cloud computing. Organizations are migrating more services to the cloud; this will have an impact on their internal complexity and on the reliability of the systems they offer to the organization itself and to their clients. This added complexity, and the associated risks to reliability, are not always recognized. Likewise, when two or more CI systems interact, the risks of one can affect the rest, so risks are shared. This work investigates the use of cloud computing by critical systems, focusing on the dependencies and reliability of these systems. Some examples are presented together with the associated risks. A framework is introduced for analysing the dependability and resilience of a system that relies on cloud services, and for improving them. As part of the framework, the concepts of micro- and macro-dependability are introduced to capture the internal dependability and the external dependability on services supplied by an external cloud. A pharmacovigilance system model has been used for framework validation.
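One way to read the micro/macro split is as a simple series-availability model: the overall system is up only when its internal components (micro) and the external cloud services it relies on (macro) are all up. A toy sketch under that assumption; the product model and every figure below are illustrative, not the thesis's framework:

```python
# Toy series-availability reading of micro- vs. macro-dependability:
# the system is available only if its internal components (micro) and
# the external cloud services it relies on (macro) are all available.
# The product model and all figures are illustrative assumptions.

from math import prod

micro = {"app-server": 0.999, "local-db": 0.9995}         # internal components
macro = {"cloud-storage": 0.9990, "cloud-queue": 0.9999}  # external services

micro_dependability = prod(micro.values())
macro_dependability = prod(macro.values())
system = micro_dependability * macro_dependability

print(f"micro  = {micro_dependability:.5f}")
print(f"macro  = {macro_dependability:.5f}")
print(f"system = {system:.5f}")   # ~ 0.99740
```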

Relevance: 40.00%

Abstract:

With the advancement of Information and Communication Technology (ICT), which favors increasingly fast, easy, and accessible communication for all and can reach large groups of people, there have been changes in recent years in our society that have modified the way we interact, communicate and transmit information. Access to information is possible not only through computers situated in a fixed location: new mobile devices make it available wherever the user happens to be. Information now "travels" with the user. These new forms of communication, transmission and access to information have also affected the way business is conceived and managed. To the new forms of business that the Internet has brought is now added the concept of companies in the Cloud Computing (ClC). ClC technology is based on the supply and consumption of services on demand with pay-per-use, and it gives a 180-degree turn to the business management concept. Small and large businesses may use the latest developments in ICT to manage their organizations without the need for expensive investments. This enables enterprises to focus more specifically on the scope of their business, leaving ICT control to the experts. We believe that education can also, and should, benefit from these new philosophies. Due to the global economic crisis, in general and in each country in particular, economic cutbacks have reached most universities, which see the need to raise tuition rates, so that fewer and fewer students have the opportunity to pursue higher education. In this paper we propose using ClC technologies in universities, and we discuss the advantages this can provide to both universities and students. For the universities, we propose two focuses: one, to reorganize university ICT structures with the ClC philosophy, and the other, to extend the offer of university education with education on demand. Regarding the former, we propose to use public or private clouds, to reuse resources across the education community, to save costs on infrastructure investment, upgrades and maintenance of ICT, and to pay only for what is used, with the ability to scale according to needs. Regarding the latter, we propose an educational model in the ClC to increase the current university offerings, using educational units in the form of low-cost services, where students pay only for the units consumed on demand. Students could study at any university in the world (virtually), from anywhere, without travel costs in money and time, and, most importantly, paying only for what they consume. We think that this proposal of education on demand may represent a great change in the current educational model: strict registration deadlines disappear, as does the problem of economically disadvantaged students, who would not have to raise large amounts of money for an annual tuition. It would also reduce the loss of money invested in an enrollment when a student drops out. In summary, we think this proposal is interesting for both universities and students; we aim for "higher education from anywhere, with access from any mobile device, at any time, without requiring large investments for students, and with reuse and optimization of resources by universities; cost by consumption and consumption by service". We argue for a Universal University: "wisdom and knowledge accessible to all".
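The pay-per-use arithmetic behind "education on demand" is simple to illustrate. A toy comparison with invented prices (placeholders, not figures from the paper):

```python
# Toy comparison of a fixed annual tuition versus the pay-per-use
# "education on demand" model described above. All prices are
# invented placeholders.

ANNUAL_TUITION = 4000.0   # assumed flat annual fee (placeholder)
PRICE_PER_UNIT = 120.0    # assumed price of one educational unit

def on_demand_cost(units_consumed):
    """Students pay only for the units they actually consume."""
    return units_consumed * PRICE_PER_UNIT

for units in (5, 20, 40):
    print(f"{units:>2} units: on-demand {on_demand_cost(units):7.2f} "
          f"vs. flat tuition {ANNUAL_TUITION:.2f}")
```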

Relevance: 40.00%

Abstract:

This is the final report on reproducibility@xsede, a one-day workshop held in conjunction with XSEDE14, the annual conference of the Extreme Science and Engineering Discovery Environment (XSEDE). The workshop's discussion-oriented agenda focused on reproducibility in large-scale computational research. Two important themes capture the spirit of the workshop submissions and discussions: (1) organizational stakeholders, especially supercomputer centers, are in a unique position to promote, enable, and support reproducible research; and (2) individual researchers should conduct each experiment as though someone would replicate it. Participants documented numerous issues, questions, technologies, practices, and potentially promising initiatives emerging from the discussion, but also highlighted four areas of particular interest to XSEDE: (1) documentation and training that promote reproducible research; (2) system-level tools that provide build- and run-time information at the level of the individual job; (3) the need to model best practices in research collaborations involving XSEDE staff; and (4) continued work on gateways and related technologies. In addition, an intriguing question emerged from the day's interactions: would there be value in establishing an annual award for excellence in reproducible research?

Relevance: 40.00%

Abstract:

The Monge-Ampère equation method could be the most advanced point-source algorithm for freeform optics design. This paper introduces the method and outlines two key issues that should be tackled to improve it.
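For context, the method takes its name from the fully nonlinear PDE of Monge-Ampère type; in freeform illumination design it generically takes the following form (a textbook-style statement of the equation type, not this paper's specific formulation):

```latex
% Generic Monge–Ampère type equation (illustrative form): u describes
% the freeform surface, and the right-hand side couples the source and
% target intensity distributions.
\det\big(D^{2}u(x)\big) = f\big(x,\, u(x),\, \nabla u(x)\big)
```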

Relevance: 40.00%

Abstract:

It has been shown that cloud computing brings cost benefits and promotes efficiency in the operations of organizations, no matter their type or size. However, few public organizations are benefiting from this paradigm shift in the way organizations consume and manage computational resources. The objective of this thesis is to analyze both the internal and external factors that may influence the adoption of cloud computing by public organizations, and to propose possible strategies that can assist these organizations on their path to cloud usage. In order to achieve this objective, a SWOT analysis has been conducted, identifying the internal factors (strengths and weaknesses) and external factors (opportunities and threats) that can influence the adoption of a governmental cloud. By combining the internal and external factors through a TOWS matrix, a list of possible strategies has been formulated to be used as a guide for decision-making related to the transition to a cloud environment.
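The TOWS step is mechanical: each internal factor class is crossed with each external one to yield the four strategy families (SO, ST, WO, WT). A minimal sketch with placeholder factors (not the thesis's actual SWOT results):

```python
# Mechanical sketch of a TOWS matrix: crossing internal factors
# (strengths, weaknesses) with external ones (opportunities, threats)
# yields the four strategy families. Factor texts are placeholders.

from itertools import product

internal = {"S": ["existing IT staff"], "W": ["legacy procurement rules"]}
external = {"O": ["government cloud programs"], "T": ["vendor lock-in"]}

labels = {("S", "O"): "SO (use strengths to exploit opportunities)",
          ("S", "T"): "ST (use strengths to avoid threats)",
          ("W", "O"): "WO (overcome weaknesses via opportunities)",
          ("W", "T"): "WT (minimize weaknesses and avoid threats)"}

for i, e in product(internal, external):
    for a, b in product(internal[i], external[e]):
        print(f"{labels[(i, e)]}: pair '{a}' with '{b}'")
```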