502 results for runtime assertions
Abstract:
It has been years since the introduction of the Dynamic Network Optimization (DNO) concept, yet DNO development is still in its infancy, largely due to the lack of a breakthrough in reducing the lengthy optimization runtime. Our previous work, a distributed parallel solution, achieved a significant speed gain. To cope with the increased optimization complexity brought by the uptake of smartphones and tablets, however, this paper examines the potential areas for further improvement and presents a novel asynchronous distributed parallel design that minimizes inter-process communication. The new approach is implemented and applied to real-life projects, and the results demonstrate a speedup of 7.5 times on a 16-core distributed system, compared to 6.1 times for our previous solution. Moreover, there is no degradation in the optimization outcome. This is a solid step towards the realization of DNO.
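To make the asynchronous idea concrete, here is a minimal TypeScript sketch (not the paper's design): each simulated worker optimizes its share of the network independently and reports to a coordinator only when it finishes, so no per-iteration barrier or message exchange is needed. All names (optimizeCell, Coordinator) are illustrative.

```typescript
// Illustrative sketch only: asynchronous workers that avoid per-iteration
// synchronization and report a result exactly once, minimizing communication.

type Result = { cellId: number; cost: number };

// Simulated per-worker optimization of one group of network cells.
async function optimizeCell(cellId: number, rounds: number): Promise<Result> {
  let cost = 1000 + cellId;
  for (let i = 0; i < rounds; i++) {
    cost *= 0.95;                                              // local improvement, no communication
    await new Promise((r) => setTimeout(r, Math.random() * 10)); // uneven worker speed
  }
  return { cellId, cost };
}

class Coordinator {
  private best = new Map<number, number>();
  // Workers report asynchronously; the coordinator never blocks on a barrier.
  report(r: Result): void {
    const prev = this.best.get(r.cellId);
    if (prev === undefined || r.cost < prev) this.best.set(r.cellId, r.cost);
  }
  total(): number {
    return [...this.best.values()].reduce((a, b) => a + b, 0);
  }
}

async function main() {
  const coordinator = new Coordinator();
  const cells = [0, 1, 2, 3];
  // Each "process" runs independently and reports once, instead of
  // synchronizing with every other process after every iteration.
  await Promise.all(
    cells.map((c) => optimizeCell(c, 20).then((r) => coordinator.report(r)))
  );
  console.log("combined cost:", coordinator.total().toFixed(2));
}

main();
```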
Abstract:
The clandestine treatise Origo et fundamenta religionis christianae attacks the foundations of Christianity and proposes a natural religion. Although all of the surviving manuscript copies date from the eighteenth century, there is sufficient evidence to point to the Silesian Martin Seidel as its author and to document the existence of the text at the end of the sixteenth century or the beginning of the seventeenth. The earliest sources on Seidel link him to the antitrinitarians of Heidelberg (1570), the Polish Unitarians (1580) and the "crypto-Socinians" of Altdorf (1610). In this article I assess these sources and, in their light, correct some claims made in recent scholarship about Seidel and his Origo.
Abstract:
This talk explores how the runtime system and operating system can leverage metrics that express the significance and resilience of application components in order to reduce the energy footprint of parallel applications. We will explore in particular how software can tolerate and indeed exploit higher error rates in future processors and memory technologies that may operate outside their safe margins.
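As a rough illustration of the idea (not the speaker's actual system), the TypeScript sketch below tags tasks with a significance level and lets a hypothetical runtime execute low-significance tasks in a cheap, error-prone mode with a fallback value, while high-significance tasks always run reliably.

```typescript
// Minimal sketch: significance-aware task execution with error tolerance.

type Significance = "high" | "low";

interface Task<T> {
  significance: Significance;
  exact: () => T;        // reliable but energy-hungry execution
  approximate?: () => T; // cheap execution that may run on unreliable hardware
  fallback: T;           // value used if an unreliable execution fails
}

function runTask<T>(task: Task<T>, unreliableMode: boolean): T {
  const approx = task.approximate;
  if (task.significance === "high" || !unreliableMode || approx === undefined) {
    return task.exact();
  }
  try {
    return approx();      // tolerate occasional wrong or failed results
  } catch {
    return task.fallback; // confine the error instead of recomputing
  }
}

// Example: a pixel-averaging task whose result can tolerate small errors.
const blurPixel: Task<number> = {
  significance: "low",
  exact: () => (10 + 12 + 11 + 13) / 4,
  approximate: () => (10 + 13) / 2, // samples fewer neighbours
  fallback: 12,
};

console.log(runTask(blurPixel, true));
```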
Abstract:
Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power and performance overheads, which leads many researchers to doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on confining the output error induced by any reliability issue. Focusing on memory faults, rather than correcting every single error, the proposed method exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method to the enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault-tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3%, respectively.
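The core idea can be illustrated with a small sketch. The real mechanism is implemented in hardware via custom instructions; the TypeScript API below is hypothetical: when a load is flagged as faulty, or the value falls outside the range profiled for that data, the value is replaced by the best available estimate, here the previous good sample or the profiled mean.

```typescript
// Sketch of error confinement: replace erroneous data with a statistical estimate
// rather than detecting and correcting every bit error.

interface Profile {
  mean: number; // profiled offline from the target application's data
  min: number;
  max: number;
}

// Returns the stored value if it is plausible, otherwise the estimate.
function loadWithConfinement(
  value: number,
  faulty: boolean,          // e.g. reported by a parity/ECC detector
  profile: Profile,
  lastGood: number | null   // previous neighbouring sample, if any
): number {
  if (!faulty && value >= profile.min && value <= profile.max) return value;
  return lastGood ?? profile.mean;
}

// Example: an 8-bit luminance sample corrupted by a bit flip.
const profile: Profile = { mean: 118, min: 0, max: 255 };
console.log(loadWithConfinement(131, false, profile, 129)); // 131 (kept)
console.log(loadWithConfinement(3,   true,  profile, 129)); // 129 (estimated)
```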
Abstract:
In recent years, a significant number of new robotic systems have appeared in science, industry, and everyday life. To reduce the complexity of these systems, industry builds robots designed to execute a specific task, such as vacuum cleaning, autonomous driving, observation, or transportation. As a result, such robotic systems need to combine their capabilities to accomplish complex tasks that exceed the abilities of individual robots. However, to achieve emergent cooperative behavior, multi-robot systems require a decision process that copes with the communication challenges of the application domain. This work investigates a distributed multi-robot decision process that addresses unreliable and transient communication. The process consists of five steps, which we embedded into the ALICA multi-agent coordination language, guided by the PROViDE negotiation middleware. The first step encompasses the specification of the decision problem, which is an integral part of the ALICA implementation. In our decision process, we describe multi-robot problems as continuous nonlinear constraint satisfaction problems. The second step addresses the calculation of solution proposals for this problem specification. Here, we propose an efficient solution algorithm that integrates incomplete local search and interval propagation techniques into a satisfiability solver, forming a satisfiability modulo theories (SMT) solver. In the third step, the PROViDE middleware replicates the solution proposals among the robots. This replication process is parameterized with a distribution method, which determines the consistency properties of the proposals. The fourth step addresses conflict resolution: an acceptance method ensures that each robot supports one of the replicated proposals. Since conflict resolution is integrated into the replication process, a sound selection of the distribution and acceptance methods leads to eventual convergence of the robot proposals. In order to avoid the execution of conflicting proposals, the last step comprises a decision method, which selects a proposal for implementation in case conflict resolution fails. The evaluation of our work shows that the incomplete solution techniques of the constraint satisfaction solver outperform other state-of-the-art approaches in runtime for many typical robotic problems. We further show, through experimental setups and practical application in the RoboCup environment, that our decision process is suitable for making quick decisions in the presence of packet loss and delay. Moreover, PROViDE requires less memory and bandwidth than other state-of-the-art middleware approaches.
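As a rough sketch of the acceptance step (the actual distribution and acceptance methods in PROViDE are richer), the TypeScript code below shows a deterministic rule that every robot can apply to the same set of replicated proposals, so that all robots converge on the same proposal without further negotiation. The types and field names are hypothetical.

```typescript
// Illustrative sketch of deterministic proposal acceptance among robots.

interface Proposal {
  robotId: number;       // proposer
  assignment: number[];  // candidate solution of the constraint problem
  utility: number;       // how well it satisfies the (soft) constraints
}

// Acceptance rule: highest utility wins; ties are broken by lowest robot id,
// so every robot that sees the same proposal set picks the same winner.
function accept(proposals: Proposal[]): Proposal | null {
  if (proposals.length === 0) return null;
  return proposals.reduce((best, p) =>
    p.utility > best.utility ||
    (p.utility === best.utility && p.robotId < best.robotId)
      ? p
      : best
  );
}

const replicated: Proposal[] = [
  { robotId: 2, assignment: [1, 0], utility: 0.8 },
  { robotId: 1, assignment: [0, 1], utility: 0.8 },
  { robotId: 3, assignment: [1, 1], utility: 0.5 },
];
console.log(accept(replicated)); // robot 1's proposal, on every robot
```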
Abstract:
Nurses – along with many others – are often told that they should or must accept and work towards the promotion of social justice. Further, it is claimed that social justice represents, or is, a shared nursing value. This presentation challenges these assertions. Claims regarding shared values easily fall prey to forms of attribution error. Alternatively, while social justice is sometimes presented as a remedy or alternative to market disutility, the quality of the arguments in which this linkage appears leaves much to be desired and, in such instances, the robustness of these claims collapses. Or, assertions regarding social justice frequently appear without supporting explanation or justification. It is simply assumed that social justice (inadequately defined) is a ‘good thing’. This is not necessarily a problem. The normative strength of a claim does not rest only upon the arguments put forward in support of it. However, when social justice is advanced as mere assertion, often in a manner devoid of specificity, claims that the concept should be embraced, and claims that the concept should or can promote action in the world, lack persuasive force. Moreover, in some articulations, the concept appears to generate illiberal and intolerant consequences. This presentation does not argue for inequality or social injustice. Rather, it suggests that underdeveloped and frail arguments require improvement or dismissal.
Abstract:
In a competitive world marked by profound changes in work processes and performance, new forms of evaluation and new professional competencies are needed. These needs foster an environment that is not always fair and healthy, something essential to the well-being of an organization. This study aims to examine perceptions of organizational justice in the performance appraisal of two companies in the pharmaceutical sector (company A and company B). It also examines how sociodemographic variables affect the performance appraisal. This topic is underexplored in Portugal, and the study contributes to a deeper understanding of the multidimensionality of justice perceptions in the sales forces of these organizations. The descriptive research was conducted through a survey, with assertions drawn from Sotomayor (2006), to investigate perceptions of distributive, procedural, interpersonal and informational justice. The results indicate that the sales force perceives organizational justice in performance appraisal across all four dimensions, with procedural justice obtaining the highest score, followed by distributive justice. Organization B shows higher levels of distributive, procedural, interpersonal and informational justice than organization A. The relationship between sociodemographic variables and the perception of justice in performance appraisal is not supported in this sample, and it does not compromise the perception of justice. The study shows how decisive performance management policies and practices are in organizations with sales forces, safeguarding the sustained strategic success of the organization: employees perceive that the organization values them and cares for their well-being, with positive perceptions of justice in performance appraisal.
Online puzzle game running on a bidirectional communication framework based on Node.js and Socket.io
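A minimal sketch of the kind of bidirectional exchange Socket.io provides between a game server and its players is shown below; the event names ("move", "board") and the state handling are placeholders, not the actual game's protocol.

```typescript
// Minimal Socket.io server: pushes state to new players and broadcasts moves.
import { Server } from "socket.io";

const io = new Server(3000, { cors: { origin: "*" } });
const board: string[] = []; // shared puzzle state (placeholder)

io.on("connection", (socket) => {
  socket.emit("board", board);        // push current state to the new player

  socket.on("move", (piece: string) => {
    board.push(piece);                // apply the move on the server
    io.emit("board", board);          // broadcast the new state to everyone
  });
});
```

A client would connect with io("http://localhost:3000") from the socket.io-client package, listen for "board" events and emit "move" events in the opposite direction, giving the bidirectional flow the description refers to.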
Abstract:
Master's Degree in Computer Engineering (Escuela Universitaria de Informática)
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This study examines the factors that influence public managers in the adoption of advanced practices related to Information Security Management. The research used, as the basis of its assertions, the security standard ISO 27001:2005 and a theoretical model based on the TAM (Technology Acceptance Model) of Venkatesh and Davis (2000). The method adopted was a field survey of national scope with the participation of eighty public administrators from Brazilian states, all of them managers and planners of state governments. The approach was quantitative, and the research methods were descriptive statistics, factor analysis and multiple linear regression for data analysis. The survey results showed a correlation between the constructs of the TAM model (ease of use, perception of value, attitude and intention to use) and agreement with the assertions drawn from ISO 27001, showing that these factors influence managers in the adoption of such practices. For the other independent variables of the model (organizational profile, demographic profile and manager behavior), no significant correlation with the assertions of the same standard was identified, which points to the need for further research using such constructs. It is hoped that this study may contribute positively to the discussion of Information Security Management, the adoption of security standards and the Technology Acceptance Model.
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still revolve around many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load-balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to a 45x speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed across the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values. This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is also proposed to calibrate thermal sensors across a range of voltages.
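The load-balancing idea can be sketched as follows (an illustration, not the Single-Chip Cloud Computer implementation): faults sit in a shared queue and each core repeatedly pulls the next small batch, so faster cores automatically end up simulating more faults than slower or congested ones.

```typescript
// Sketch of dynamic load balancing for parallel fault simulation.

const faults = Array.from({ length: 1000 }, (_, i) => i); // fault list
let next = 0;                                             // shared queue index
const BATCH = 16;

function takeBatch(): number[] {
  const start = next;
  next = Math.min(next + BATCH, faults.length);
  return faults.slice(start, next);
}

async function core(id: number, speed: number): Promise<number> {
  let simulated = 0;
  for (let batch = takeBatch(); batch.length > 0; batch = takeBatch()) {
    // Simulate the batch; slower or congested cores take longer per batch.
    await new Promise((r) => setTimeout(r, batch.length / speed));
    simulated += batch.length;
  }
  return simulated;
}

async function main() {
  const speeds = [4, 2, 1, 1]; // heterogeneous effective core speeds
  const done = await Promise.all(speeds.map((s, id) => core(id, s)));
  done.forEach((n, id) => console.log(`core ${id} simulated ${n} faults`));
}

main();
```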
Abstract:
In today's big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
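The contrast with vertex-centric programming can be sketched as follows (hypothetical API, not NSCALE's actual interface): a framework extracts the ego network around each vertex of interest, and the user function operates on that whole subgraph, here counting triangles through the ego vertex, a simple motif-counting task.

```typescript
// Illustrative sketch of neighborhood-centric programming over a graph.

type Graph = Map<number, number[]>; // adjacency list

interface Neighborhood {
  ego: number;
  nodes: Set<number>;
  edges: Array<[number, number]>;
}

// "Framework" side: extract the 1-hop ego network of a vertex of interest.
function extractEgoNetwork(g: Graph, ego: number): Neighborhood {
  const nodes = new Set<number>([ego, ...(g.get(ego) ?? [])]);
  const edges: Array<[number, number]> = [];
  for (const u of nodes)
    for (const v of g.get(u) ?? [])
      if (nodes.has(v) && u < v) edges.push([u, v]);
  return { ego, nodes, edges };
}

// "User" side: operate on the whole subgraph, not a single vertex's state.
// Each edge between two neighbours of the ego closes one triangle.
function trianglesThroughEgo(n: Neighborhood): number {
  return n.edges.filter(([u, v]) => u !== n.ego && v !== n.ego).length;
}

const g: Graph = new Map([
  [1, [2, 3, 4]],
  [2, [1, 3]],
  [3, [1, 2]],
  [4, [1]],
]);
console.log(trianglesThroughEgo(extractEgoNetwork(g, 1))); // 1
```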
Abstract:
Elasticity is one of the best-known capabilities of cloud computing, and it is largely deployed reactively using thresholds. In this approach, maximum and minimum limits are used to drive resource allocation and deallocation actions, leading to the following questions: How can cloud users set threshold values to enable elasticity in their cloud applications? And what is the impact of the application's load pattern on elasticity? This article tries to answer these questions for iterative high-performance computing applications, showing the impact of both thresholds and load patterns on application performance and resource consumption. To accomplish this, we developed a reactive, PaaS-based elasticity model called AutoElastic and employed it on a private cloud to execute a numerical integration application. Here, we present an analysis of best practices and possible optimizations regarding the elasticity and HPC pair. Considering the results, we observed that the maximum threshold influences the application time more than the minimum one. We concluded that threshold values close to 100% of CPU load are directly related to weaker reactivity, postponing resource reconfiguration when activating it in advance could be pertinent to reducing the application runtime.
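The reactive mechanism described above boils down to a simple rule; the TypeScript sketch below uses placeholder thresholds and a node count, not AutoElastic's actual code.

```typescript
// Minimal sketch of reactive, threshold-based elasticity.

interface ElasticityConfig {
  maxThreshold: number; // e.g. 0.70 -> allocate when average CPU load exceeds 70%
  minThreshold: number; // e.g. 0.30 -> deallocate when it falls below 30%
  minNodes: number;
  maxNodes: number;
}

function decide(
  avgCpuLoad: number,   // observed load across current nodes (0..1)
  currentNodes: number,
  cfg: ElasticityConfig
): number {
  if (avgCpuLoad > cfg.maxThreshold && currentNodes < cfg.maxNodes) {
    return currentNodes + 1; // scale out before the application saturates
  }
  if (avgCpuLoad < cfg.minThreshold && currentNodes > cfg.minNodes) {
    return currentNodes - 1; // scale in to save resources
  }
  return currentNodes;       // stay within the hysteresis band
}

const cfg: ElasticityConfig = { maxThreshold: 0.7, minThreshold: 0.3, minNodes: 2, maxNodes: 16 };
console.log(decide(0.85, 4, cfg)); // 5 -> allocate
console.log(decide(0.20, 4, cfg)); // 3 -> deallocate
console.log(decide(0.50, 4, cfg)); // 4 -> no action
```

Setting maxThreshold close to 1.0 makes the allocation branch fire only once the nodes are already saturated, which matches the observation above that thresholds near 100% of CPU load weaken reactivity.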