981 results for Multi-sided platform


Relevance:

30.00%

Publisher:

Abstract:

Heterogeneous multi-core FPGAs contain different types of cores, which can improve efficiency when used with an effective online task scheduler. However, it is not easy to find the right cores for tasks when there are multiple objectives or dozens of cores, and inappropriate scheduling may cause hot spots that decrease the reliability of the chip. Given that, our research builds a simulation platform to evaluate all kinds of scheduling algorithms on a variety of architectures. On this platform, we provide an online scheduler that uses a multi-objective evolutionary algorithm (EA). Comparing the EA with current algorithms such as Predictive Dynamic Thermal Management (PDTM) and Adaptive Temperature Threshold Dynamic Thermal Management (ATDTM), we find several drawbacks in previous work. First, current algorithms are overly dependent on manually set constant parameters. Second, those algorithms neglect optimization for heterogeneous architectures. Third, they use single-objective methods, or use a linear weighting method to convert a multi-objective optimization into a single-objective one. Unlike other algorithms, the EA is adaptive and does not require resetting parameters when workloads switch from one to another. EAs also improve performance when used on heterogeneous architectures, and an efficient Pareto front can be obtained with EAs when multiple objectives are pursued.
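
As a rough illustration of the core idea (not the authors' scheduler; the core speeds, task workloads, thermal proxy, and EA operators below are all invented for this sketch), a minimal multi-objective EA can evolve task-to-core assignments toward a Pareto front over makespan and a hotspot proxy:

```python
import random

# Illustrative sketch only: a tiny multi-objective EA that maps tasks to
# heterogeneous cores, minimizing makespan and a crude peak-heat proxy.
CORE_SPEED = [1.0, 1.0, 1.5, 2.0]          # invented heterogeneous cores
TASK_WORK  = [4, 3, 7, 2, 5, 6, 1, 8]      # invented abstract work units

def objectives(assign):
    load = [0.0] * len(CORE_SPEED)
    for task, core in enumerate(assign):
        load[core] += TASK_WORK[task] / CORE_SPEED[core]
    makespan = max(load)
    hotspot = max(l * s for l, s in zip(load, CORE_SPEED))  # faster core, more heat
    return makespan, hotspot

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def evolve(pop_size=40, gens=100):
    pop = [[random.randrange(len(CORE_SPEED)) for _ in TASK_WORK]
           for _ in range(pop_size)]
    for _ in range(gens):
        child = random.choice(pop)[:]                     # clone a parent
        child[random.randrange(len(child))] = random.randrange(len(CORE_SPEED))
        pop.append(child)                                 # mutated offspring
        front = [p for p in pop                           # non-dominated survivors
                 if not any(dominates(objectives(q), objectives(p)) for q in pop)]
        pop = (front + random.sample(pop, max(0, pop_size - len(front))))[:pop_size]
    return {tuple(objectives(p)) for p in pop}

if __name__ == "__main__":
    for point in sorted(evolve()):
        print("makespan=%.2f  hotspot=%.2f" % point)
```

A production scheduler would use calibrated power/thermal models and an archive-based algorithm such as NSGA-II, but the dominance test and mutation loop above are the essential ingredients.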

Relevance:

30.00%

Publisher:

Abstract:

Yani Chaushev, Miloslav Sredkov, Krasimir Manev - Every programming contest uses a number of software tools to manage its processes during the contest. Although these tools usually cover the specifics of the particular type of contest satisfactorily, they rarely address the difficulties of long-term data storage and interoperability. This paper presents a software tool that addresses these problems. Rather than a complex system covering all aspects of a contest, CORE is a centralized repository for storing and maintaining the data needed by different contests. Its main elements, the current state of its implementation, and the prospects for the system's development are presented.

Relevance:

30.00%

Publisher:

Abstract:

Femtosecond laser microfabrication has emerged over the last decade as a flexible 3D technology in photonics. Numerical simulations provide important insight into spatial and temporal beam and pulse shaping during the course of extremely intricate nonlinear propagation (see e.g. [1,2]). The electromagnetics of such propagation is typically described by the generalized Non-Linear Schrödinger Equation (NLSE) coupled with a Drude model for plasma [3]. In this paper we consider a multi-threaded parallel numerical solution for a specific model that describes femtosecond laser pulse propagation in transparent media [4, 5]; however, our approach can be extended to similar models. The numerical code is implemented on an NVIDIA Graphics Processing Unit (GPU), which provides an efficient hardware platform for multi-threaded computing. We compare the performance of the parallel GPU code described below, implemented using the CUDA programming interface [3], with the serial CPU version used in our previous papers [4,5]. © 2011 IEEE.
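
For readers unfamiliar with the numerical scheme, here is a minimal split-step Fourier solver for the plain 1D cubic NLSE, a drastic simplification of the paper's generalized NLSE with plasma terms; all parameters are invented. Each FFT bin and each time sample maps naturally to a GPU thread, which is what makes the method attractive for CUDA:

```python
import numpy as np

# Simplified 1D cubic NLSE, i*A_z + (-beta2/2)*A_tt + gamma*|A|^2*A = 0,
# solved by the symmetric split-step Fourier method on the CPU.
N, T = 1024, 20.0
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=T / N)
beta2, gamma, dz, steps = -1.0, 1.0, 1e-3, 5000   # anomalous dispersion

A = 1.0 / np.cosh(t)                               # sech pulse input
half_linear = np.exp(1j * (beta2 / 2) * w**2 * dz / 2)  # half dispersion step

for _ in range(steps):
    A = np.fft.ifft(half_linear * np.fft.fft(A))   # linear half step
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)    # nonlinear full step
    A = np.fft.ifft(half_linear * np.fft.fft(A))   # linear half step

print("peak intensity after propagation:", np.abs(A).max()**2)
```

With beta2 = -1 and gamma = 1 the sech input is the fundamental soliton, so a nearly unchanged peak intensity serves as a quick correctness check.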

Relevance:

30.00%

Publisher:

Abstract:

The Internet has become an integral part of our nation’s critical socio-economic infrastructure. With its heightened use and growing complexity, however, organizations are at greater risk of cyber crimes. To aid in the investigation of crimes committed on or via the Internet, a network forensics analysis tool pulls together the needed digital evidence. It provides a platform for performing deep network analysis by capturing, recording and analyzing network events to determine the source of a security attack or other information security incident. Existing network forensics work has mostly focused on the Internet and fixed networks. But the exponential growth and use of wireless technologies, coupled with their unprecedented characteristics, necessitate the development of new network forensic analysis tools. This dissertation fostered the emergence of a new research field in cellular and ad-hoc network forensics. It was one of the first works to identify this problem and offer fundamental techniques and tools that laid the groundwork for future research. In particular, it introduced novel methods to record network incidents and to report logged incidents. For recording incidents, location is considered essential to documenting network incidents. However, in network topology spaces, location cannot be measured due to the absence of a ‘distance metric’. Therefore, a novel solution was proposed to label the locations of nodes within network topology spaces, and then to authenticate the identity of nodes in ad hoc environments. For reporting logged incidents, a novel technique based on Distributed Hash Tables (DHT) was adopted. Although the direct use of DHTs for reporting logged incidents would result in uncontrollably recursive traffic, a new mechanism was introduced that overcomes this recursive process. These logging and reporting techniques aided forensics over cellular and ad-hoc networks, which in turn increased the ability to track and trace attacks to their source. They were a starting point for further research and development that would equip future ad hoc networks with forensic components to complement existing security mechanisms.
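
As a sketch of the DHT idea (not the dissertation's protocol: the hash space, ring layout, and hop budget below are invented for illustration), a report can be routed to the node that owns the report's key, with an explicit budget so that aggregated re-reports cannot recurse without bound:

```python
import hashlib
from bisect import bisect_right

# Illustrative DHT-style incident reporting with a bounded re-report chain.
def h(value: str) -> int:
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % 2**16

class Ring:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)   # consistent-hash ring

    def owner(self, key: str) -> str:
        ids = [nid for nid, _ in self.ring]
        i = bisect_right(ids, h(key)) % len(self.ring)  # first node past key hash
        return self.ring[i][1]

def report(ring, key, payload, hops=3):
    node = ring.owner(key)
    print(f"incident {key!r} logged at {node}")
    if hops > 1:   # bounded aggregated re-report: the budget stops recursion
        report(ring, f"summary:{node}", payload, hops - 1)

ring = Ring(["node-a", "node-b", "node-c", "node-d"])
report(ring, "attack:10.0.0.7:2024-05-01T12:00", {"type": "probe"})
```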

Relevance:

30.00%

Publisher:

Abstract:

Hurricanes are among the most destructive and costly natural hazards to the built environment, and their impact on low-rise buildings in particular is beyond acceptable. The major objective of this research was to perform a parametric evaluation of internal pressure (IP) for wind-resistant design of low-rise buildings and for wind-driven natural ventilation applications. For this purpose, a multi-scale experimental approach, i.e. full-scale at the Wall of Wind (WoW) and small-scale at a Boundary Layer Wind Tunnel (BLWT), was adopted together with a Computational Fluid Dynamics (CFD) approach. This provided new capability to assess wind pressures realistically on internal volumes ranging from the small spaces formed between roof tiles and the deck, to attics, to room partitions. The effects on the IP of sudden breaching, of existing dominant openings on building envelopes, and of compartmentalization of the building interior were systematically investigated. Results of this research indicated: (i) for sudden breaching of dominant openings, the transient overshooting response was lower than the subsequent steady-state peak IP, and internal volume correction was necessary for low-wind-speed testing facilities; for example, a building without volume correction experienced a response four times faster and exhibited 30–40% lower mean and peak IP; (ii) for existing openings, vent openings uniformly distributed along the roof alleviated the IP, whereas one-sided openings aggravated it; (iii) larger dominant openings exhibited a higher IP on the building envelope, and an off-center opening on the wall exhibited 30–40% higher IP than center-located openings; (iv) compartmentalization amplified the intensity of the IP; and (v) significant underneath pressure was measured for field tiles, warranting its consideration during net pressure evaluations. The part of the study aimed at wind-driven natural ventilation indicated: (i) the IP due to cross ventilation was 1.5 to 2.5 times higher for Ainlet/Aoutlet > 1 than for Ainlet/Aoutlet < 1, which in effect reduced the mixing of air inside the building and hence the ventilation effectiveness; (ii) the presence of multi-room partitioning increased the pressure differential and consequently the air exchange rate. Overall, good agreement was found between the observed large-scale, small-scale and CFD-based IP responses. Comparisons with ASCE 7-10 consistently demonstrated that the code underestimated peak positive and suction IP.

Relevance:

30.00%

Publisher:

Abstract:

Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. The use of multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e. dependency of the application on a certain cloud platform, which is prejudicial in the case of degradation or failure of platform services, or even of price increases on service usage; (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or even due to the failure of some service. In a multi-cloud scenario it is possible to exchange a failing service, or one with QoS problems, for an equivalent service on another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms able to select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in the development of such applications include: (i) the choice of which underlying services and cloud computing platforms should be used, based on the user-defined requirements in terms of functionality and quality; (ii) the need to continually monitor dynamic information (such as response time, availability and price) related to cloud services, in addition to the wide variety of services; and (iii) the need to adapt the application if QoS violations affect user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration meets them more efficiently. This work therefore proposes a strategy composed of two phases. The first phase consists of the application modeling, exploring the capacity to represent commonalities and variability proposed in the context of the Software Product Line (SPL) paradigm. In this phase an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified as properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation. In this work we implement the adaptation strategy using several programming techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed steps, we sought to assess the following: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared to a sequential approach; and (iii) which techniques offer the best trade-off between development/modularity effort and performance.
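
A minimal sketch of the second phase's control loop (invented services, providers, and metrics; not the thesis implementation) might select the cheapest provider satisfying each requirement and re-plan when monitoring detects a violation:

```python
# MAPE-K-style sketch: pick one provider per abstract service so that user
# requirements hold, and re-plan when monitored QoS violates them.
REQUIREMENTS = {"storage": {"max_ms": 120}, "queue": {"max_ms": 80}}

PROVIDERS = {  # knowledge base: candidate bindings per service (variability)
    "storage": {"cloudA": {"ms": 90, "price": 5}, "cloudB": {"ms": 60, "price": 9}},
    "queue":   {"cloudA": {"ms": 70, "price": 3}, "cloudC": {"ms": 50, "price": 6}},
}

def plan(monitored):
    """Choose the cheapest provider meeting each service's requirement."""
    config = {}
    for service, req in REQUIREMENTS.items():
        ok = [(p, m) for p, m in monitored[service].items() if m["ms"] <= req["max_ms"]]
        if not ok:
            raise RuntimeError(f"no provider satisfies {service}")
        config[service] = min(ok, key=lambda pm: pm[1]["price"])[0]
    return config

def mape_k_iteration(current, monitored):
    violated = any(monitored[s][current[s]]["ms"] > REQUIREMENTS[s]["max_ms"]
                   for s in current)          # Analyze the monitored data
    return plan(monitored) if violated else current   # Plan a new binding

config = plan(PROVIDERS)                      # initial optimal selection
print("initial binding:", config)
PROVIDERS["queue"]["cloudA"]["ms"] = 200      # Monitor observes degradation
print("after adaptation:", mape_k_iteration(config, PROVIDERS))
```

The "K" of MAPE-K is the PROVIDERS knowledge base; the Execute step would rebind the application's service proxies to the newly planned configuration.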

Relevance:

30.00%

Publisher:

Abstract:

Visualization of geological observations and their interpretation into a cohesive geological model are essential to Earth sciences and related fields. Various emerging technologies offer approaches to multi-scale visualization of heterogeneous data, providing new opportunities that facilitate the model development and interpretation processes. These include increased accessibility of 3D scanning technology, global connectivity, and Web-based interactive platforms. The geological sciences and geological engineering disciplines are adopting these technologies as volumes of data and physical samples greatly increase. However, a standardized and universally agreed-upon workflow and approach have yet to be properly developed. In this thesis, the 3D scanning workflow is presented as the foundation for a virtual geological database. This database provides an augmented level of tangibility for students and researchers who have little or no access to remote or otherwise inaccessible locations. A Web-GIS platform was utilized jointly with customized widgets, developed over the course of this research, to aid in visualizing hand-sized/meso-scale geological samples within a geologic and geospatial context. This context is provided as a macro-scale GIS interface in which geophysical and geodetic images and data are visualized. Specifically, an interactive interface is developed that allows for simultaneous visualization to improve the understanding of geological trends and relationships. These tools will allow rapid data access and global sharing, and will facilitate the comprehension of geological models using multi-scale heterogeneous observations.

Relevance:

30.00%

Publisher:

Abstract:

Part 13: Virtual Reality and Simulation

Relevance:

30.00%

Publisher:

Abstract:

Visualization of vector fields plays an important role in research activities nowadays. Web applications allow fast, multi-platform and multi-device access to data, which results in the need for optimized applications that can run on both high-performance and low-performance devices. Point trajectory calculation procedures usually perform repeated calculations, because several points may lie on the same trajectory. This paper presents a new methodology to calculate point trajectories over a highly dense, uniformly distributed grid of points, in which the trajectories are forced to lie on the points of the grid. Its advantages lie in a highly parallel computing architecture implementation and in the reduction of the computational effort to calculate the stream paths, since unnecessary calculations are avoided by reusing data across iterations. As a case study, the visualization of oceanic currents in the web platform is presented and analyzed, using WebGL as the parallel computing architecture and rendering Application Programming Interface.
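
The following sketch (Python rather than the paper's WebGL/JavaScript, with an invented vector field) captures the grid-constrained idea: each step snaps to the neighboring grid node best aligned with the local vector, and a per-node "next node" table is computed once and then reused by every trajectory passing through that node:

```python
import math

W, H = 8, 6
def vector(x, y):                       # invented stationary field (a swirl)
    return (-(y - H / 2), x - W / 2)

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

def next_node(node, cache={}):
    if node not in cache:               # computed once, reused by every path
        vx, vy = vector(*node)
        best = max(NEIGHBORS, key=lambda d: (d[0] * vx + d[1] * vy) /
                   math.hypot(*d))      # neighbor most aligned with the field
        nx, ny = node[0] + best[0], node[1] + best[1]
        cache[node] = (nx, ny) if 0 <= nx < W and 0 <= ny < H else None
    return cache[node]

def trajectory(start, max_steps=20):
    path, node = [start], start
    for _ in range(max_steps):
        node = next_node(node)
        if node is None or node in path:   # left the grid or closed a loop
            break
        path.append(node)
    return path

print(trajectory((1, 1)))
```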

Relevance:

30.00%

Publisher:

Abstract:

Technologies for Big Data and Data Science are receiving increasing research interest nowadays. This paper introduces the prototype architecture of a tool aimed at solving Big Data optimization problems. Our tool combines the jMetal framework for multi-objective optimization with Apache Spark, a technology that is gaining momentum. In particular, we make use of the streaming facilities of Spark to feed an optimization problem with data from different sources. We demonstrate the use of our tool by solving a dynamic bi-objective instance of the Traveling Salesman Problem (TSP) based on near real-time traffic data from New York City, which is updated several times per minute. Our experiment shows that jMetal and Spark can be integrated to provide a software platform for dealing with dynamic multi-objective optimization problems.
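
A toy stand-in for that pipeline (pure Python; not the jMetal or Spark APIs, with all data invented) shows the dynamic aspect: the Pareto front over (static distance, current travel time) must be recomputed whenever the simulated stream updates the time matrix:

```python
import itertools, random

# Toy dynamic bi-objective TSP: distance is static, travel time drifts as a
# simulated stream ticks, invalidating the archive of non-dominated tours.
N = 7
random.seed(1)
DIST = [[abs(i - j) for j in range(N)] for i in range(N)]          # static
TIME = [[abs(i - j) * random.uniform(0.5, 2.0) for j in range(N)]  # dynamic
        for i in range(N)]

def cost(tour, m):
    return sum(m[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))  # closed cycle

def pareto(tours):
    objs = {t: (cost(list(t), DIST), cost(list(t), TIME)) for t in tours}
    return [t for t in tours
            if not any(all(x <= y for x, y in zip(objs[u], objs[t]))
                       and objs[u] != objs[t] for u in tours)]

tours = [tuple(random.sample(range(N), N)) for _ in range(200)]
print("front size before update:", len(pareto(tours)))
for i, j in itertools.product(range(N), range(N)):   # one stream tick:
    TIME[i][j] *= random.uniform(0.8, 1.5)           # travel times drift
print("front size after update: ", len(pareto(tours)))
```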

Relevance:

30.00%

Publisher:

Abstract:

The consequences of algorithmic management for workers are well known among scholars, but few studies investigate the possibilities of agency, especially at the individual level, in the gig economy. Starting from the everyday experience of the work, the objective is to analyze the forms of agency exercised by platform workers in the last-mile logistics sector. The research is based on a multi-sited ethnography conducted in two very distant countries and covering two different urban platform services: food delivery in Italy (Bologna, Turin) and ride-hailing in Argentina (Buenos Aires). Despite the differences, the fieldwork showed several continuities between the geographical contexts. First of all, digital technologies play an ambivalent role in the work environment: while technology is used by companies to discipline labor, it is also a tool that can be employed to the workers' advantage. In both ride-hailing and food-delivery platforms, in fact, workers express their agency by sharing workaround practices and tactics to circumvent algorithmic despotism. Secondly, the research brought to light a great variety of economic activities developed at the margins of the platform economy. In both cases the platforms vividly intersect urban informal economies and feed informal labor circuits, as shown by the high incidence of illicit exchanges: for example, the sale of accounts, hacking bots, and digital gangmastering (caporalato). Far from initiating a process of formalization, then, the platform subsumes and reproduces the set of productive and reproductive conditions of informality (viração), offering intermittent and insecure jobs to a mass of disposable workers available for underemployment. In conclusion, the platforms are defined as baroque infrastructures, where the baroque denotes both the hybrid nature of action, which mixes forms of neoliberalism-from-below with practices of peer solidarity, and the progressive restructuring of accumulation processes around a renewed interdependence between the formal and the informal in the infrastructures of the «world delivered to your door» («mondo a domicilio»).

Relevance:

30.00%

Publisher:

Abstract:

The topic of this thesis is the design and implementation of mathematical models and control system algorithms for rotary-wing unmanned aerial vehicles to be used in cooperative scenarios. The use of rotorcraft has many attractive advantages, since these vehicles have the capability to take off and land vertically, to hover, and to move backward and laterally. Rotary-wing aircraft missions require precise control characteristics due to their unstable and heavily coupled dynamics. As a matter of fact, flight test is the most accurate way to evaluate flying qualities and to test control systems. However, it may be very expensive and/or not feasible at the early design and prototyping stages. A good compromise is a preliminary assessment performed by means of simulations together with a reduced flight testing campaign. Consequently, having an analytical framework represents an important stage for simulations and control algorithm design. In this work, mathematical models for various helicopter configurations are implemented. Different flight control techniques for helicopters are presented with their theoretical background and tested via simulations and experimental flight tests on a small-scale unmanned helicopter. The same platform is also used in a cooperative scenario with a rover. Control strategies and algorithms, and their implementation to perform missions, are presented for two main scenarios. One of the main contributions of this thesis is a suitable control system made of a classical PID baseline controller augmented with an L1 adaptive contribution. In addition, a complete analytical framework and a study of the dynamics and stability of a synch-rotor are provided. Finally, cooperative control strategies are implemented for two main scenarios that include a small-scale unmanned helicopter and a rover.
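
As a schematic of the baseline-plus-augmentation idea (a toy first-order plant with invented gains, not the thesis controller or a full L1 design), a PID loop can be augmented with a state predictor, an adaptation law, and a low-pass-filtered compensation term:

```python
# Toy plant x' = -x + u + d with an unknown constant disturbance d. The PID
# baseline tracks the reference; the adaptive augmentation estimates d via a
# state predictor and injects a filtered cancellation term, in the spirit of
# L1 augmentation. All gains and the plant are invented for this sketch.
dt, T = 0.01, 10.0
kp, ki, kd = 2.0, 1.0, 0.1          # PID baseline gains
gamma, k_filter = 50.0, 5.0         # adaptation rate, filter bandwidth

x, x_hat, sigma_hat, u_ad = 0.0, 0.0, 0.0, 0.0
integral, prev_err, ref = 0.0, 0.0, 1.0
disturbance = -0.8                  # unknown to the controller

for step in range(int(T / dt)):
    err = ref - x
    integral += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    u_pid = kp * err + ki * integral + kd * deriv

    u = u_pid + u_ad
    x += (-x + u + disturbance) * dt                 # plant step

    x_hat += (-x_hat + u + sigma_hat) * dt           # state predictor
    sigma_hat += -gamma * (x_hat - x) * dt           # adaptation law
    u_ad += k_filter * (-sigma_hat - u_ad) * dt      # low-pass filtered compensation

print(f"final tracking error: {ref - x:+.4f}, disturbance estimate: {sigma_hat:+.4f}")
```

The low-pass filter is the characteristic ingredient: adaptation can run fast while only the filtered estimate enters the control signal.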

Relevance:

30.00%

Publisher:

Abstract:

The use of extracorporeal organ support (ECOS) devices is increasingly widespread, to temporarily sustain or replace the functions of impaired organs in critically ill patients. Among ECOS, respiratory functions are supplied by extracorporeal life support (ECLS) therapies such as extracorporeal membrane oxygenation (ECMO) and extracorporeal carbon dioxide removal (ECCO2R), while renal replacement therapies (RRT) are used to support kidney function. However, the leading cause of mortality in critically ill patients is multi-organ dysfunction syndrome (MODS), which requires a complex therapeutic strategy in which extracorporeal treatments are often integrated with the pharmacological approach. Recently, the concept of multi-organ support therapy (MOST) has been introduced, in which several forms of isolated ECOS devices are sequentially connected to provide simultaneous support to different organ systems. The future of critical care points towards the development of extracorporeal devices offering multiple organ support therapies on demand from a single hardware platform, where treatment lines can be used alternately or in conjunction. The aim of this industrial PhD project is to design and validate a device for multi-organ support, developing an auxiliary line for renal replacement therapy (hemofiltration) to be integrated on a platform for ECCO2R. The intended purpose of the ancillary line, which can be connected on demand, is to remove excess fluids by ultrafiltration and to achieve volume control through the infusion of a replacement solution, as patients undergoing respiratory support are particularly prone to developing fluid overload. Furthermore, an ultrafiltration regulation system shall be developed using a powered, software-modulated pinch valve on the effluent line of the hemofilter, proposed as an alternative to the state-of-the-art solution with a peristaltic pump.
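
A highly simplified sketch of what software modulation of the pinch valve could look like (a proportional loop on invented toy hydraulics; not the device firmware) is:

```python
# Proportional control of a pinch-valve opening on the effluent line so the
# measured ultrafiltration rate tracks a prescribed target; all numbers are
# invented illustration values.
dt = 1.0                      # control period, s
target_uf = 8.0               # target ultrafiltration rate, mL/min
valve = 0.3                   # valve opening, 0 (pinched) .. 1 (fully open)
kp = 0.01                     # proportional gain

def uf_rate(opening):         # toy hydraulics: flow grows with opening
    return 25.0 * opening     # mL/min

removed = 0.0
for t in range(600):          # ten minutes of one-second control steps
    measured = uf_rate(valve)
    valve = min(1.0, max(0.0, valve + kp * (target_uf - measured)))
    removed += measured * dt / 60.0
print(f"valve opening {valve:.2f}, fluid removed in 10 min: {removed:.1f} mL")
```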

Relevance:

20.00%

Publisher:

Abstract:

Although various abutment connections and materials have recently been introduced, insufficient data exist regarding the effect of stress distribution on their mechanical performance. The purpose of this study was to investigate the effect of different abutment materials and platform connections on stress distribution in single anterior implant-supported restorations with the finite element method. Nine experimental groups were modeled from the combination of 3 platform connections (external hexagon, internal hexagon, and Morse tapered) and 3 abutment materials (titanium, zirconia, and hybrid), as follows: external hexagon-titanium, external hexagon-zirconia, external hexagon-hybrid, internal hexagon-titanium, internal hexagon-zirconia, internal hexagon-hybrid, Morse tapered-titanium, Morse tapered-zirconia, and Morse tapered-hybrid. The finite element models consisted of a 4×13-mm implant, an anatomic abutment, and a lithium disilicate central incisor crown cemented over the abutment. A 49 N occlusal load was applied in 6 steps to simulate the incisal guidance. The equivalent von Mises stress (σvM) was used for both the qualitative and quantitative evaluation of the implant and abutment in all the groups, and the maximum (σmax) and minimum (σmin) principal stresses were used for the numerical comparison of the zirconia parts. The highest abutment σvM occurred in the Morse-tapered groups and the lowest in the external hexagon-hybrid, internal hexagon-titanium, and internal hexagon-hybrid groups. The σmax and σmin values were lower in the hybrid groups than in the zirconia groups. The stress distribution was concentrated at the abutment-implant interface in all the groups, regardless of the platform connection or abutment material. The platform connection influenced the stress on the abutments more than the abutment material did. The stress values for the implants were similar among the different platform connections, but greater stress concentrations were observed in the internal connections.
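
For reference, the quantities compared in the study can be computed from a 3D stress tensor as follows (the tensor entries below are arbitrary example numbers, not data from the study):

```python
import numpy as np

# Equivalent von Mises stress and the maximum/minimum principal stresses
# from a symmetric 3D stress tensor (example values in MPa).
sigma = np.array([[120.0, 30.0,  0.0],
                  [ 30.0, 80.0, 10.0],
                  [  0.0, 10.0, 40.0]])

principal = np.sort(np.linalg.eigvalsh(sigma))[::-1]   # sigma_1 >= sigma_2 >= sigma_3
s1, s2, s3 = principal
von_mises = np.sqrt(((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 2)

print(f"sigma_max = {s1:.1f} MPa, sigma_min = {s3:.1f} MPa, sigma_vM = {von_mises:.1f} MPa")
```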