943 results for Space Extended Systems


Relevance:

30.00%

Publisher:

Abstract:

This paper presents a Finite Element Model that has been used for forecasting the diffusion of innovations in time and space. Unlike conventional models in the diffusion literature, this model accounts for spatial heterogeneity. The implementation steps of the model are explained by applying it to the diffusion of photovoltaic systems in a local region in southern Germany. The applied model is based on a parabolic partial differential equation that describes the diffusion ratio of photovoltaic systems in a given region over time. The results of the application show that the Finite Element Model constitutes a powerful tool for better understanding the diffusion of an innovation as a simultaneous space-time process. Model limitations and possible extensions for future research are also discussed.
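
The abstract does not state the equation itself; a plausible generic form of such a parabolic model, written here purely as an illustrative assumption, is a reaction-diffusion equation with space-dependent coefficients:

\[
\frac{\partial u}{\partial t} = \nabla \cdot \big( D(x)\, \nabla u \big) + r(x)\, u\, (1 - u),
\]

where \(u(x,t)\) would be the local share of adopters of photovoltaic systems, \(D(x)\) a spatially varying diffusivity capturing the region's heterogeneity, and \(r(x)\) a local adoption rate; none of these symbols are the paper's notation. A finite element method would discretize such a PDE on a mesh of the region and step it forward in time.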

Relevance:

30.00%

Publisher:

Abstract:

New cloud-oriented technologies, the Internet of Things, and "as a service" trends are based on storing and processing data on remote servers. To guarantee the security of those data, both in transit to the remote server and while being handled there, different cryptographic schemes are used. Traditionally, these cryptographic schemes focus on encrypting the data while they do not need to be processed, that is, during communication and storage. However, once the remote server has to process the encrypted data, it must first decrypt them, at which point an intruder on the server could access sensitive user data. Moreover, this traditional approach assumes that the server is trustworthy, granting it the credentials needed to decipher the data in order to process them.
Fully homomorphic encryption (FHE) schemes arise as a possible solution to these problems. A fully homomorphic scheme does not require decrypting the data in order to operate on them; instead, it performs the operations directly over the ciphertext ring, maintaining a homomorphism between the ciphertext ring and the plaintext ring. As a result, an intruder could steal nothing but ciphertexts, making it impossible to recover the sensitive data without also stealing the encryption keys. However, homomorphic encryption (HE) schemes are currently drastically slower than classical encryption schemes: one operation in the plaintext ring can entail numerous operations in the ciphertext ring. For this reason, different approaches are emerging to speed up these schemes enough for practical use. One of them is High-Performance Computing (HPC) using FPGAs (Field Programmable Gate Arrays). An FPGA is a semiconductor device containing logic blocks whose interconnection and functionality can be reprogrammed; compiling for an FPGA generates a hardware circuit specific to the given algorithm, instead of executing instructions on a universal machine, which is a great advantage over CPUs. FPGAs thus differ clearly from CPUs in two respects:

- A pipelined architecture, which allows successive outputs to be obtained in constant time.
- The possibility of multiple pipes for concurrent/parallel computation.

In this project:

- We present different implementations of FHE schemes on FPGA-based systems.
- We analyze and study the advantages and drawbacks of the implemented FHE schemes on FPGA-based systems.
- We compare the implementations with related work.
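
As a minimal illustration of the homomorphism the abstract describes (not one of the schemes implemented in the project), here is a toy Paillier-style additively homomorphic scheme in Python; the parameters are deliberately tiny and offer no real security:

```python
import math
import random

# Toy Paillier-style additively homomorphic scheme (illustrative only:
# these parameters are far too small for any real security).
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 7, 35
# Multiplying ciphertexts corresponds to adding plaintexts:
assert decrypt((encrypt(a) * encrypt(b)) % n2) == (a + b) % n
```

Note how a single plaintext addition already costs a full modular multiplication over the much larger ciphertext ring, which is the performance gap the project attacks with FPGAs.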

Relevance:

30.00%

Publisher:

Abstract:

Schrödinger’s equation of a three-body system is a linear partial differential equation (PDE) defined on the 9-dimensional configuration space, ℝ⁹, naturally equipped with Jacobi’s kinematic metric and with translational and rotational symmetries. The natural invariance of Schrödinger’s equation with respect to the translational symmetry enables us to reduce the configuration space to a 6-dimensional one, while the rotational symmetry provides the quantum mechanical version of angular momentum conservation. However, the problem of maximizing the use of rotational invariance so as to reduce Schrödinger’s equation to corresponding PDEs solely defined on triangular parameters, i.e., at the level of ℝ⁶/SO(3), has never been adequately treated. This article describes the results on the orbital geometry and the harmonic analysis of (SO(3), ℝ⁶) which enable us to obtain such a reduction of Schrödinger’s equation of three-body systems to PDEs solely defined on triangular parameters.
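
For concreteness (a standard construction, not quoted from the article), the translational reduction ℝ⁹ → ℝ⁶ can be realized by passing to Jacobi vectors for particles at positions \(\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3\) with masses \(m_1, m_2, m_3\):

\[
\boldsymbol{\rho}_1 = \mathbf{x}_2 - \mathbf{x}_1, \qquad
\boldsymbol{\rho}_2 = \mathbf{x}_3 - \frac{m_1 \mathbf{x}_1 + m_2 \mathbf{x}_2}{m_1 + m_2},
\]

which discard the center of mass and leave the pair \((\boldsymbol{\rho}_1, \boldsymbol{\rho}_2) \in \mathbb{R}^6\). The rotation group SO(3) acts diagonally on this pair, and the triangular parameters mentioned in the abstract coordinatize the quotient ℝ⁶/SO(3).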

Relevance:

30.00%

Publisher:

Abstract:


Relevance:

30.00%

Publisher:

Abstract:

Using a simplified model of small open liquid-like clusters with surface effects in the gas phase, it is shown how the statistical thermodynamics of small systems can be extended to include metastable supersaturated gaseous states not too far from the gas–liquid equilibrium transition point. To accomplish this, one has to distinguish between mathematical divergence and physical convergence of the open-system partition function.
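
The abstract's distinction can be stated in standard notation (assumed here, not quoted from the paper): the open-system partition function is the grand-canonical sum

\[
\Xi(\mu, V, T) = \sum_{N=0}^{\infty} e^{\beta \mu N}\, Z_N(V, T), \qquad \beta = \frac{1}{k_{\mathrm{B}} T},
\]

where \(Z_N\) is the canonical partition function of an \(N\)-molecule cluster and \(\mu\) the chemical potential. For supersaturated states the question is when this series, though it may diverge mathematically, still admits a physically convergent interpretation.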

Relevance:

30.00%

Publisher:

Abstract:

To “control” a system is to make it behave (hopefully) according to our “wishes,” in a way compatible with safety and ethics, at the least possible cost. The systems considered here are distributed, i.e., governed (modeled) by partial differential equations (PDEs) of evolution. Our “wish” is to drive the system in a given time, by an adequate choice of the controls, from a given initial state to a given final state, which is the target. If this can be achieved (respectively, if we can reach any “neighborhood” of the target), the system, with the controls at our disposal, is exactly (respectively, approximately) controllable. A very general (and fuzzy) idea is that the more “unstable” (chaotic, turbulent) a system is, the “simpler,” or “cheaper,” it is to achieve exact or approximate controllability. When the PDEs are the Navier–Stokes equations, this leads to conjectures, which are presented and explained. Recent results, reported in this expository paper, essentially prove the conjectures in two space dimensions. In three space dimensions a large number of new questions arise; some new results support (without proving) the conjectures, such as generic controllability and cases where the cost of control decreases as the instability increases. Short comments are made on models arising in climatology, thermoelasticity, non-Newtonian fluids, and molecular chemistry. The Introduction of the paper and the first part of each section are not technical. Many open questions are mentioned in the text.
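
In the standard abstract setting (a sketch with assumed notation, not the paper's), the state \(y\) of the system evolves as

\[
y' = \mathcal{A}(y) + \mathcal{B}u, \qquad y(0) = y_0,
\]

where \(u\) is the control. Exact controllability in time \(T\) means that for the given target \(y_T\) some admissible control \(u\) yields \(y(T) = y_T\); approximate controllability means that for every \(\varepsilon > 0\) some control brings \(y(T)\) within distance \(\varepsilon\) of \(y_T\).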

Relevance:

30.00%

Publisher:

Abstract:

Wheat (Triticum aestivum L.), rice (Oryza sativa L.), and maize (Zea mays L.) provide about two-thirds of all energy in human diets, and four major cropping systems in which these cereals are grown represent the foundation of human food supply. Yield per unit time and land has increased markedly during the past 30 years in these systems, a result of intensified crop management involving improved germplasm, greater inputs of fertilizer, production of two or more crops per year on the same piece of land, and irrigation. Meeting future food demand while minimizing expansion of cultivated area primarily will depend on continued intensification of these same four systems. The manner in which further intensification is achieved, however, will differ markedly from the past because the exploitable gap between average farm yields and genetic yield potential is closing. At present, the rate of increase in yield potential is much less than the expected increase in demand. Hence, average farm yields must reach 70–80% of the yield potential ceiling within 30 years in each of these major cereal systems. Achieving consistent production at these high levels without causing environmental damage requires improvements in soil quality and precise management of all production factors in time and space. The scope of the scientific challenge related to these objectives is discussed. It is concluded that major scientific breakthroughs must occur in basic plant physiology, ecophysiology, agroecology, and soil science to achieve the ecological intensification that is needed to meet the expected increase in food demand.

Relevance:

30.00%

Publisher:

Abstract:

Soil vapor extraction (SVE) systems can be used to remediate environmental sites that have been contaminated with petroleum products. However, SVE systems rely on a vacuum to draw vapors through the pore space in soils, so they are not as effective in low-permeability soils. This study aims to determine whether SVE systems can be used on low-permeability soils in conjunction with companion technologies. The results indicate that SVE systems can be utilized in low-permeability soils if used in conjunction with companion technologies that increase soil permeability and contaminant volatilization. Based on contaminant removal rates and cost estimates, the most promising companion technology is six-phase soil heating.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a multilayered architecture that enhances the capabilities of current QA systems and allows different types of complex questions or queries to be processed. The answers to these questions need to be gathered from factual information scattered throughout different documents. Specifically, we designed a specialized layer to process the different types of temporal questions. Complex temporal questions are first decomposed into simple questions, according to the temporal relations expressed in the original question. In the same way, the answers to the resulting simple questions are recomposed, fulfilling the temporal restrictions of the original complex question. A novel aspect of this approach resides in the decomposition, which uses a minimal quantity of resources, with the final aim of obtaining a portable platform that is easily extensible to other languages. In this paper we also present a methodology for evaluating both the decomposition of the questions and the ability of the implemented temporal layer to perform at a multilingual level. The temporal layer was first implemented for English, then evaluated and compared with: a) a general-purpose QA system (F-measure 65.47% for QA plus the English temporal layer vs. 38.01% for the general QA system), and b) a well-known QA system. Much better results were obtained for temporal questions with the multilayered system. The system was therefore extended to Spanish, and very good results were again obtained in the evaluation (F-measure 40.36% for QA plus the Spanish temporal layer vs. 22.94% for the general QA system).
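
As a rough illustration of the decomposition step (the signal-word list, the example question, and the output structure are all hypothetical, not the paper's implementation):

```python
import re

# Illustrative sketch of temporal question decomposition: split a complex
# question at a temporal signal word into two simple questions plus the
# temporal relation that must be enforced when recomposing the answers.
TEMPORAL_SIGNALS = ["before", "after", "while", "during"]

def decompose(question: str) -> dict:
    """Split a complex temporal question at its temporal signal word."""
    for signal in TEMPORAL_SIGNALS:
        parts = re.split(rf"\b{signal}\b", question, maxsplit=1, flags=re.I)
        if len(parts) == 2:
            simple = [p.strip().rstrip("?") + "?" for p in parts]
            return {"relation": signal, "simple_questions": simple}
    return {"relation": None, "simple_questions": [question]}

print(decompose("Who governed Spain after Franco died?"))
# {'relation': 'after',
#  'simple_questions': ['Who governed Spain?', 'Franco died?']}
```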

Relevance:

30.00%

Publisher:

Abstract:

The original motivation for this paper was to provide an efficient quantitative analysis of convex infinite (or semi-infinite) inequality systems whose decision variables run over general infinite-dimensional (resp. finite-dimensional) Banach spaces and that are indexed by an arbitrary fixed set J. Parameter perturbations on the right-hand side of the inequalities are required to be merely bounded, and thus the natural parameter space is ℓ∞(J). Our basic strategy consists of linearizing the parameterized convex system by splitting the convex inequalities into linear ones via the Fenchel–Legendre conjugate. Under this approach, arbitrary bounded right-hand-side perturbations of the convex system turn into constant-by-blocks perturbations of the linearized system. Based on advanced variational analysis, we derive a precise formula, involving only the system’s data, for the exact Lipschitzian bound of the feasible solution map of block-perturbed linear systems, and then show that this exact bound agrees with the coderivative norm of the aforementioned mapping. In this way we extend to the convex setting the results of Cánovas et al. (SIAM J. Optim. 20, 1504–1526, 2009), developed for arbitrary perturbations with no block structure in the linear framework under a boundedness assumption on the system’s coefficients. That boundedness assumption is removed in this paper when the decision space is reflexive. The last section provides the aimed application to the convex case.
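
The linearization step can be made concrete (a standard conjugacy identity, with notation assumed here rather than taken from the paper): for a proper lower semicontinuous convex \(f_j\), biconjugation splits each convex inequality into a family of linear ones,

\[
f_j(x) \le b_j
\quad\Longleftrightarrow\quad
\langle u, x \rangle - f_j^*(u) \le b_j \quad \text{for all } u \in \operatorname{dom} f_j^*,
\]

where \(f_j^*(u) = \sup_x \{\langle u, x \rangle - f_j(x)\}\) is the Fenchel–Legendre conjugate. A bounded perturbation of the single right-hand side \(b_j\) then shifts the whole block of linear inequalities indexed by \(u\) by the same constant, which is the constant-by-blocks structure mentioned above.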

Relevance:

30.00%

Publisher:

Abstract:

The theory and methods of linear algebra are a useful alternative to those of convex geometry in the framework of Voronoi cells and diagrams, which constitute basic tools of computational geometry. As shown by Voigt and Weis in 2010, the Voronoi cells of a given set of sites T, which provide a tessellation of the space (called the Voronoi diagram) when T is finite, are the solution sets of linear inequality systems indexed by T. This paper systematically exploits this fact in order to obtain geometrical information on Voronoi cells from sets associated with T (convex and conical hulls, tangent cones, and the characteristic cones of their linear representations). The particular cases of T being a curve, a closed convex set, and a discrete set are analyzed in detail. We also include conclusions on Voronoi diagrams of arbitrary sets.
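
Concretely (a standard derivation, with notation assumed here): the Voronoi cell of a site \(t \in T\) is

\[
V_T(t) = \{\, x : \|x - t\| \le \|x - s\| \ \text{for all } s \in T \,\},
\]

and squaring and expanding each inequality yields the linear system

\[
\langle s - t,\, x \rangle \le \tfrac{1}{2}\big(\|s\|^2 - \|t\|^2\big), \qquad s \in T,
\]

so each cell is indeed the solution set of a linear inequality system indexed by T.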

Relevance:

30.00%

Publisher:

Abstract:

Commercial off-the-shelf microprocessors are the core of low-cost embedded systems owing to their programmability and cost-effectiveness. Recent advances in electronic technologies have allowed remarkable improvements in their performance. However, they have also made microprocessors more susceptible to transient faults induced by radiation. These non-destructive events (soft errors) may cause a microprocessor to produce a wrong computation result or lose control of a system, with catastrophic consequences. Therefore, soft error mitigation has become a compulsory requirement for an increasing number of applications, operating from space down to ground level. In this context, this paper uses the concept of selective hardening, which aims to design reduced-overhead, flexible mitigation techniques. Following this concept, a novel flexible version of the software-based fault recovery technique known as SWIFT-R is proposed. Our approach makes it possible to select different subsets of registers from the microprocessor register file to be protected in software. The design space is thus enriched with a wide spectrum of new partially protected versions, which offer more flexibility to designers and make it possible to find the best trade-offs between performance, code size, and fault coverage. Three case studies have been developed to show the applicability and flexibility of the proposal.
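
To convey the idea behind SWIFT-R-style recovery (a language-neutral sketch in Python; SWIFT-R itself triplicates registers in compiled code):

```python
# Triplicate a protected value and vote before each use: a single upset
# in one copy is outvoted by the other two. This sketch applies the idea
# to a plain variable rather than a microprocessor register.
def vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant copies."""
    return (a & b) | (a & c) | (b & c)

x0 = x1 = x2 = 0b1011              # three copies of a protected value
x1 ^= 0b0100                       # simulated single-bit soft error
assert vote(x0, x1, x2) == 0b1011  # the fault is masked
```

Selective hardening then corresponds to choosing which registers receive this treatment, trading fault coverage against the time and code-size overhead of the extra copies and votes.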

Relevance:

30.00%

Publisher:

Abstract:

The design of fault-tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques, based on the selective application of redundancy, have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process required to assess their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform an automatic design space exploration in search of the best trade-offs between reliability, cost, and performance. The first tool is driven by a genetic algorithm which can simultaneously fulfill many design goals thanks to the NSGA-II multi-objective algorithm. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
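
The core relation driving an NSGA-II-style search is Pareto dominance between candidate hardened versions; a minimal sketch (the objective vectors are hypothetical, all components to be minimized):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

# Hypothetical (time overhead, code-size overhead, 1 - fault coverage)
# triples for two candidate hardened versions of an AES-like application:
v1, v2 = (1.20, 1.35, 0.08), (1.45, 1.40, 0.08)
assert dominates(v1, v2)  # v1 is no worse everywhere and better somewhere
```

NSGA-II uses this relation to sort candidates into non-dominated fronts, so the exploration returns a whole Pareto set of trade-offs rather than a single "best" hardened version.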

Relevance:

30.00%

Publisher:

Abstract:

The concepts of substantive beliefs and derived beliefs are defined, together with a set of substantive beliefs S treated as an open set and the neighborhood of a substantive belief. A semantic operation of conjunction is defined with the structure of an Abelian group. Mathematical structures such as belief posets and belief join-semilattices exist. A metric space of beliefs and a believer-dependent distance between beliefs are defined, along with the concepts of closed and open balls. S′ is defined as a subgroup of the metric space of beliefs Σ, and S′ is a totally bounded set. The term s (substantive belief) is defined in terms of the closure of S′. It is deduced that Σ is paracompact by Stone's theorem. The pseudometric space of beliefs is defined to show how the metric of the non-believing subject yields a topological space, a non-material abstract ideal space formed in the mind of the believing subject, satisfying the Kuratowski closure axioms. To establish patterns of the materialization of beliefs, we consider that beliefs have well-defined mathematical structures. This allows a better understanding of cultural processes of text, architecture, norms, and education, which are forms of the materialization of an ideology. This materialization is the conversion, by means of certain mathematical correspondences, of an abstract set whose elements are beliefs or ideas into an impure set whose elements are material or energetic. Text is a materialization of ideology.
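
For reference (the standard axioms, not restated in the abstract), a closure operator cl on the subsets of a space satisfies the Kuratowski closure axioms:

\[
\operatorname{cl}(\varnothing) = \varnothing, \qquad
A \subseteq \operatorname{cl}(A), \qquad
\operatorname{cl}(A \cup B) = \operatorname{cl}(A) \cup \operatorname{cl}(B), \qquad
\operatorname{cl}(\operatorname{cl}(A)) = \operatorname{cl}(A).
\]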