901 results for Exception Handling. Exceptional Behavior. Exception Policy. Software Testing. Design Rules
Abstract:
Doctoral thesis, Universidade de Brasília, Centro de Desenvolvimento Sustentável, 2015.
Abstract:
Modern System-on-a-Chip (SoC) designs have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially as energy consumption and chip area have become two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeatedly executed functions. The performance of SoC systems can then be improved if hardware acceleration is applied to the element that incurs the performance overhead. The concepts presented in this study can be readily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform are as follows: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified using attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform, and two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System characteristics such as performance, energy consumption, and resource costs are measured and analyzed, the trade-off among these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated against system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves a 7.9X performance improvement and saves 75.85% of energy consumption.
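As an illustration of the profiling step described in contribution (1), the following sketch ranks functions by cumulative time using Python's cProfile; the kernel and workload are hypothetical stand-ins, since the thesis profiles an H.264 CODEC core with hardware-oriented attributes such as cycles per loop rather than a Python profiler.

```python
# Minimal sketch of hotspot identification by profiling. The functions
# below are invented placeholders for a math-heavy CODEC kernel.
import cProfile
import pstats


def dct_block(block):
    # Stand-in for a repeated, computation-heavy kernel: a typical
    # candidate for conversion into an FPGA hardware accelerator.
    return [sum(x * x for x in block) for _ in range(64)]


def encode_frame(frame):
    # Mimics a per-macroblock loop (396 macroblocks in a CIF frame).
    return [dct_block(frame) for _ in range(396)]


def main():
    frame = list(range(64))
    for _ in range(10):  # several frames
        encode_frame(frame)


if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.runcall(main)
    stats = pstats.Stats(profiler)
    # Functions with the highest cumulative time are the hotspot
    # candidates to offload to hardware.
    stats.sort_stats("cumulative").print_stats(5)
```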
Abstract:
All structures are subjected to various loading conditions and combinations. For offshore structures, these loads include permanent loads, hydrostatic pressure, and wave, current, and wind loads. Sea environments in different geographical regions are typically characterized by the 100-year wave height, surface currents, and wind speeds. The main problems associated with the commonly used deterministic method are that not all waves have the same period and that the actual stochastic nature of the marine environment is not taken into account. Fatigue design of offshore steel structures is done using the DNVGL-RP-0005:2016 standard, which supersedes the DNV-RP-C203 standard (2012). Fatigue analysis is necessary for oil and gas producing offshore steel structures, first constructed in the Gulf of Mexico (in the 1930s) and later in the North Sea (in the 1960s). Fatigue strength is commonly described by S-N curves obtained from laboratory experiments. The rapid development of the offshore wind industry has driven exploration into deeper ocean areas and the adoption of new support structure concepts such as full lattice tower systems, among others. The optimal design of offshore wind support structures, including foundation, turbine tower, and transition piece components, while taking economy, safety, and the environment into consideration, is a critical challenge. This study discusses the fatigue design challenges of transition pieces from decommissioned platforms for offshore wind energy. The fatigue resistance of the material and structural components under uniaxial and multiaxial loading is introduced together with the new fatigue design rules, considering the combination of global and local modeling using finite element analysis software.
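To make the S-N approach concrete, here is a minimal sketch of fatigue damage accumulation under the Palmgren-Miner rule with a two-parameter S-N curve of the form log10(N) = log a - m log10(S), the form used in DNVGL-RP-C203-style design; the curve constants and the stress-range histogram are illustrative assumptions, not values for any particular detail class.

```python
import math


def cycles_to_failure(stress_range_mpa, log_a=12.164, m=3.0):
    """Cycles to failure N from log10(N) = log_a - m * log10(S).

    The default constants resemble a typical in-air S-N curve but are
    assumptions for illustration only.
    """
    return 10 ** (log_a - m * math.log10(stress_range_mpa))


def miner_damage(histogram):
    """Palmgren-Miner damage D = sum(n_i / N_i) over {stress range: cycles}."""
    return sum(n / cycles_to_failure(s) for s, n in histogram.items())


# Hypothetical long-term stress-range histogram (MPa -> cycles per year).
annual_histogram = {20.0: 2.0e6, 40.0: 5.0e5, 80.0: 2.0e4}

damage = miner_damage(annual_histogram)
print(f"Annual damage D = {damage:.3f}; "
      f"fatigue life = {1.0 / damage:.1f} years (failure at D = 1)")
```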
Abstract:
Architectures based on Coordinated Atomic action (CA action) concepts have been used to build concurrent fault-tolerant systems. This conceptual model combines concurrent exception handling with action nesting to provide a general mechanism both for enclosing interactions among system components and for coordinating forward error recovery measures. This article presents an architectural model to guide the formal specification of concurrent fault-tolerant systems. The architecture provides built-in Communicating Sequential Processes (CSP) components and predefined channels to coordinate exception handling across the user-defined components. Hence, safety properties concerning action scoping and concurrent exception handling can be proved using the FDR (Failures-Divergences Refinement) verification tool. As a result, a formal and general architecture supporting software fault tolerance is ready to be used and verified as users define components with normal and exceptional behaviors.
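The following Python sketch illustrates, in a deliberately simplified form and not in the paper's CSP notation, the CA-action idea of coordinating concurrent exceptions at the action boundary: participants run inside an enclosed scope, and any exceptions they raise are collected and resolved by a single handler that can initiate forward error recovery.

```python
# Toy illustration of coordinated exception handling in an atomic action.
from concurrent.futures import ThreadPoolExecutor


class ActionException(Exception):
    """Resolved exception covering all participant failures."""


def ca_action(participants):
    exceptions = []
    with ThreadPoolExecutor(max_workers=len(participants)) as pool:
        futures = [pool.submit(p) for p in participants]
        for f in futures:
            exc = f.exception()  # waits for the participant to finish
            if exc is not None:
                exceptions.append(exc)
    if exceptions:
        # Exception resolution: all concurrent exceptions are combined
        # and handled once at the action boundary (forward recovery).
        raise ActionException(exceptions)


def ok():
    pass


def fails():
    raise RuntimeError("component failure")


try:
    ca_action([ok, fails, fails])
except ActionException as e:
    print("coordinated recovery for:", e.args[0])
```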
Abstract:
The aim of this thesis is to examine the efficiency of the Chinese stock markets and the validity of the random walk hypothesis, and also to determine whether a day-of-the-week anomaly exists in the Chinese stock markets. The data consist of daily logarithmic returns of the Shanghai Stock Exchange A-share, B-share, and composite indices and the Shenzhen composite index for the period 21.2.1992-30.12.2005, and of the Shenzhen Stock Exchange A-share and B-share indices for the period 5.10.1992-30.12.2005. Four statistical methods are used: the autocorrelation test, the nonparametric runs test, the variance ratio test, and the Augmented Dickey-Fuller unit root test. The presence of the day-of-the-week anomaly is examined using ordinary least squares (OLS) regression. The tests are performed both on the full sample and on three separate subperiods. The empirical results of this thesis support earlier findings on the inefficiency of the Chinese stock markets. With the exception of the unit root test results, the random walk hypothesis was rejected for both Chinese stock exchanges on the basis of the autocorrelation, runs, and variance ratio tests. The results show that on both exchanges the behavior of the B-share indices deviated considerably more from the random walk hypothesis than that of the A-share indices. Except for the B-share markets, the efficiency of both Chinese stock markets also appeared to improve after the market boom of 2001. The results further show that a day-of-the-week anomaly is present on the Shanghai Stock Exchange but not on the Shenzhen Stock Exchange over the full sample period.
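For readers who want to reproduce this battery of tests, the sketch below applies the autocorrelation, runs, and Augmented Dickey-Fuller tests to simulated log returns using statsmodels; the data are a stand-in for the index series, and the variance ratio test, which most toolkits require implementing by hand, is omitted for brevity.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, adfuller
from statsmodels.sandbox.stats.runs import runstest_1samp

rng = np.random.default_rng(0)
log_returns = rng.normal(0.0, 0.01, size=2000)  # placeholder for index returns

# Autocorrelations of returns: near zero under the random walk hypothesis.
print("lag-1..5 autocorrelations:", acf(log_returns, nlags=5)[1:])

# Nonparametric runs test on the signs of returns relative to the mean.
z_stat, p_value = runstest_1samp(log_returns, cutoff="mean")
print(f"runs test: z = {z_stat:.2f}, p = {p_value:.3f}")

# ADF unit root test on the log price level (cumulative returns);
# failure to reject a unit root is consistent with a random walk.
log_prices = np.cumsum(log_returns)
adf_stat, adf_p = adfuller(log_prices)[:2]
print(f"ADF: stat = {adf_stat:.2f}, p = {adf_p:.3f}")
```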
Abstract:
The Kraft pulping process is the dominant chemical pulping process in the world. Roughly 195 million metric tons of black liquor are produced annually as a by-product of the Kraft pulping process. Black liquor consists of spent cooking chemicals and dissolved organics from the wood, and can contain up to 0.15 wt% nitrogen on a dry solids basis. The cooking chemicals in black liquor are recovered in a chemical recovery cycle. Water is evaporated in the first stage of the chemical recovery cycle, so the black liquor has a dry solids content of 65-85% prior to combustion. During combustion of black liquor, a portion of the black liquor nitrogen is volatilized, ultimately forming N2 or NO. The rest of the nitrogen remains in the char as char nitrogen. During char conversion, fixed carbon is burned off, leaving the pulping chemicals as smelt, and the char nitrogen mostly forms smelt nitrogen (cyanate, OCN-). Smelt exits the recovery boiler and is dissolved in water. The cyanate in smelt decomposes in the presence of water, forming NH3, which causes nitrogen emissions from the rest of the chemical recovery cycle. This thesis had two focuses: first, to determine how the nitrogen chemistry in the recovery boiler is affected by modification of black liquor; and second, to find out what causes cyanate formation during thermal conversion, and which parameters affect cyanate formation and decomposition during thermal conversion of black liquor. The fate of added biosludge nitrogen in chemical recovery was determined in Paper I. The added biosludge increased the nitrogen content of black liquor. At the pulp mill, the added biosludge did not increase NO formation in the recovery boiler, but instead increased the amount of cyanate in green liquor. The increased cyanate caused more NH3 formation, which increased the NCG boiler's NO emissions. Laboratory-scale experiments showed an increase in both NO and cyanate formation after biosludge addition. Black liquor can be modified, for example by adding solid biomass to increase the energy density of black liquor, or by separating lignin from black liquor by precipitation. The precipitated lignin can be utilized in the production of green chemicals or as a fuel. In Papers II and III, laboratory-scale experiments were conducted to determine the impact of black liquor modification on NO and cyanate formation. Removal of lignin from black liquor reduced the nitrogen content of the black liquor. In most cases NO and cyanate formation decreased with increasing lignin removal; the exception was NO formation from lignin-lean soda liquors. The addition of biomass to black liquor resulted in a fuel mixture with a higher nitrogen content, due to the higher nitrogen content of biomass compared to black liquor. More NO and cyanate were formed from the fuel mixtures than from pure black liquor. The increased amount of formed cyanate led to the hypothesis that black liquor is catalytically active and converts a portion of the nitrogen in the mixed fuel to cyanate. The mechanism behind cyanate formation during thermal conversion of black liquor was not clear before this thesis. Paper IV studies the cyanate formation of alkali-metal-loaded fuels during gasification in a CO2 atmosphere. The salts K2CO3, Na2CO3, and K2SO4 all promoted the conversion of char nitrogen to cyanate during gasification, while KCl and CaCO3 did not. It is now assumed that cyanate is formed when alkali metal carbonate, or an active intermediate of alkali metal carbonate (e.g. -CO2K), reacts with the char nitrogen, forming cyanate. By testing different fuels (bark, peat, and coal), each of which had a different form of organic nitrogen, it was concluded that the form of organic nitrogen in the char also has an impact on cyanate formation. Cyanate can be formed during pyrolysis of black liquor, but at temperatures of 900°C or above the formed cyanate decomposes. Cyanate formation in gasifying conditions with different levels of CO2 in the atmosphere was also studied. Most of the char nitrogen was converted to cyanate during gasification at 800-900°C in 13-50% CO2 in N2, and only 5% of the initial fuel nitrogen was converted to NO during char conversion. The formed smelt cyanate was stable at 800°C in 13% CO2, while it decomposed at 900°C in 13% CO2. Cyanate decomposition was faster at higher temperatures and in oxygen-containing atmospheres than in an inert atmosphere. The presence of CO2 in oxygen-containing atmospheres slowed down the decomposition of cyanate. This work provides new information on how modification of black liquor affects the nitrogen chemistry during thermal conversion, and on what causes cyanate formation during thermal conversion of black liquor. The formation and decomposition of cyanate were studied in order to provide new data useful for modeling the nitrogen chemistry in the recovery boiler.
Abstract:
This thesis shows how the figure of the genius was constituted in France over the course of the sixteenth, seventeenth, and eighteenth centuries, highlighting the paradoxes that allowed it to become one of the fundamental notions of modernity. The analysis is organized around three main axes. First, it examines the circumstances of the invention of the term "génie" in the French language, with emphasis on its Greco-Latin cultural heritage. The notion of genius thus appears intimately linked to the genius of the French language and to its history. Second, the analysis considers the role that the notion of genius plays within the regulatory framework of poetic theory at the end of the seventeenth century. Genius, then defined as a natural aptitude for the exercise of a norm-governed regularity of making, nevertheless has value only when that regularity is transgressed and surpassed. This relation reveals the social paradox that genius represents, considered at once exceptional and exemplary. This paradox of genius is then analyzed in the context of the development of aesthetic theories in the eighteenth century, founded on a community-forming experience of the beautiful. The question is studied in light of the sensualist philosophers' interest in the problem posed by genius, in particular the mechanisms of invention and discovery. At the end of this inquiry, genius appears to be at once problematic for the theories that attempt to circumscribe it and unifying for the community it serves to illustrate.
Abstract:
Organizations introduce acceptable use policies to deter employee computer misuse. Despite the controlling, monitoring, and other forms of intervention employed, some employees misuse organizational computers to carry out personal work such as sending emails, surfing the Internet, chatting, and playing games. These activities not only waste employees' productive time but also pose a risk to the organization. A questionnaire was administered to a random sample of employees from large and medium-scale software development organizations, measuring levels of work computer misuse and the factors that influence such behavior. The presence of guidelines provided no evidence of a significant effect on the level of employee computer misuse. Not having access to the Internet/email away from work, and organizational settings, were identified as the most significant influences on work computer misuse.
Abstract:
It is a known fact that some employees misuse organizational computers for personal work such as sending emails, surfing the Internet, chatting, and playing games. These activities not only waste employees' productive time but also bring a risk factor to the organization. This particularly affects organizations in the software industry, as almost all of their employees are connected to the Internet throughout the day. By introducing an Acceptable Use Policy (AUP), it is believed that computer misuse by employees can be reduced. Acceptable Use Policies are used in many countries and have been studied from various perspectives, but in the Sri Lankan context research in these areas is scarce. This research explored the situation in Sri Lanka with respect to AUPs and their effectiveness. A descriptive study was carried out to identify the large and medium-scale software development organizations that had implemented computer usage guidelines for employees. A questionnaire was used to gather information about employees' usual computer usage behavior, and stratified random sampling was employed to draw a representative sample from the population. The majority of the organizations had not adopted a written guideline on acceptable use of work computers. The study results did not provide evidence to conclude that the presence or absence of an AUP makes a significant difference in employees' computer use behaviors. A significant negative correlation was observed between the level of awareness about the AUP and misuse. Access to the Internet and organizational settings were identified as significant factors influencing employee computer misuse behavior.
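As a sketch of the correlation analysis mentioned above, the following fragment computes a Spearman rank correlation between an AUP-awareness score and a misuse score on made-up data; the variables, scales, and sample size are illustrative assumptions, not the study's actual instrument.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
awareness = rng.integers(1, 6, size=120)              # 1-5 Likert scores
misuse = 6 - awareness + rng.normal(0, 1, size=120)   # higher awareness, less misuse

# A significant negative rho would mirror the study's reported finding.
rho, p_value = spearmanr(awareness, misuse)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```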
Abstract:
The objective of this paper is to test for optimality of consumption decisions at the aggregate level (representative consumer), taking into account popular deviations from the canonical CRRA utility model: rule-of-thumb and habit behavior. First, we show that rule-of-thumb behavior in consumption is observationally equivalent to behavior obtained from the optimizing model of King, Plosser and Rebelo (Journal of Monetary Economics, 1988), casting doubt on how reliable standard rule-of-thumb tests are. Second, although Carroll (2001) and Weber (2002) have criticized the linearization and testing of Euler equations for consumption, we provide a deeper critique directly applicable to current rule-of-thumb tests. Third, we show that there is no reason why return aggregation cannot be performed in the nonlinear setting of the Asset-Pricing Equation, since the latter is a linear function of individual returns. Fourth, aggregation of the nonlinear Euler equation forms the basis of a novel test of deviations from the canonical CRRA model of consumption in the presence of rule-of-thumb and habit behavior. We estimated 48 Euler equations using GMM, with encouraging results vis-à-vis the optimality of consumption decisions: at the 5% level, we rejected optimality only twice out of 48 times. The empirical test results show that we can still rely on the canonical CRRA model so prevalent in macroeconomics: out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant at the 5% level only twice, and the habit parameter γ to be statistically significant on four occasions. The main message of this paper is that proper return aggregation is critical for studying intertemporal substitution in a representative-agent framework. In this case, we find little evidence of lack of optimality in consumption decisions, and deviations from the CRRA utility model along the lines of rule-of-thumb behavior and habit in preferences represent the exception, not the rule.
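For reference, the moment condition underlying these GMM tests is the nonlinear consumption Euler equation; in the canonical CRRA case it reads as follows (notation assumed here, since the abstract does not define symbols), with rule-of-thumb and habit behavior entering as deviations that nest this equation as a special case:

```latex
% Canonical CRRA Euler equation estimated by GMM (notation assumed):
% \beta = subjective discount factor, \gamma = relative risk aversion,
% C_t = aggregate consumption, R_{t+1} = gross asset return.
\mathbb{E}_t\!\left[ \beta \left( \frac{C_{t+1}}{C_t} \right)^{-\gamma} R_{t+1} \right] = 1
```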
Abstract:
This paper tests the optimality of consumption decisions at the aggregate level, taking into account popular deviations from the canonical constant-relative-risk-aversion (CRRA) utility model: rule of thumb and habit. First, based on the critique in Carroll (2001) and Weber (2002) of the linearization and testing strategies using Euler equations for consumption, we provide extensive empirical evidence of their inappropriateness, a drawback for standard rule-of-thumb tests. Second, we propose a novel approach to testing consumption optimality in this context: nonlinear estimation coupled with return aggregation, where rule-of-thumb behavior and habit are special cases of an all-encompassing model. We estimated 48 Euler equations using GMM. At the 5% level, we rejected optimality only twice out of 48 times. Moreover, out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant only twice. Hence, lack of optimality in consumption decisions represents the exception, not the rule. Finally, we found the habit parameter to be statistically significant on four occasions out of 24.
Abstract:
Mainstream programming languages provide built-in exception handling mechanisms to support robust and maintainable implementation of exception handling in software systems. Many of these modern languages, such as C#, Ruby, Python and others, are often claimed to have more appropriate exception handling mechanisms: they reduce programming constraints on exception handling in order to favor agile changes in the source code. These languages provide what we call maintenance-driven exception handling mechanisms. The adoption of these mechanisms is expected to improve software maintainability without hindering software robustness. However, there is still little empirical knowledge about the impact that adopting these mechanisms has on software robustness. This work addresses this gap with an empirical study aimed at understanding the relationship between changes in C# programs and their robustness. In particular, we evaluated how changes in the normal and exceptional code were related to exception handling faults. We applied change impact analysis and control flow analysis to 100 versions of 16 C# programs. The results showed that: (i) most of the problems hindering software robustness in these programs are caused by changes in the normal code; (ii) many potential faults were introduced even when exception handling was being improved in the C# code; and (iii) faults are often facilitated by the maintenance-driven flexibility of the exception handling mechanism. Moreover, we present a series of change scenarios that decrease program robustness.
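To illustrate the kind of maintenance-driven exception handling fault the study uncovers, here is a small example, written in Python for brevity even though the study analyzes C# programs: a seemingly innocuous change broadens a handler and silently swallows exceptions that the normal code relies on being propagated.

```python
# Before the change: only the anticipated error is handled, and a
# documented fallback is returned for a missing file.
def read_config_v1(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return ""  # intended fallback for a missing config file


# After an "agile" maintenance change: the broadened handler now also
# swallows PermissionError, UnicodeDecodeError, etc., masking real
# faults in the normal code instead of letting them propagate.
def read_config_v2(path):
    try:
        with open(path) as f:
            return f.read()
    except Exception:  # fault: overly broad handler introduced by change
        return ""
```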
Abstract:
In order to minimize car-based trips, transport planners have been particularly interested in understanding the factors that explain modal choices. In the transport modelling literature there has been increasing awareness that socioeconomic attributes and quantitative variables are not sufficient to characterize travelers and forecast their travel behavior. Recent studies have also recognized that users' social interactions and land use patterns influence travel behavior, especially when changes to transport systems are introduced, but links between international and Spanish perspectives are rarely dealt with. In this paper, factor and path analyses through a Multiple-Indicator Multiple-Cause (MIMIC) model are used to understand and describe the relationships of the different psychological and environmental constructs with social influence and socioeconomic variables. The MIMIC model generates Latent Variables (LVs) that are incorporated sequentially into Discrete Choice Models (DCM), where the level-of-service and cost attributes of travel modes are also included directly, in order to measure the effect of the transport policies introduced in Madrid during the last three years in the context of the economic crisis. The data used in this paper come from a two-wave smartphone-based panel survey (n=255 and 190 respondents, respectively) conducted in Madrid.
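In the standard MIMIC formulation (notation assumed here), each latent variable is caused by observed exogenous variables and measured by observed indicators; the fitted latent scores then enter the discrete choice utilities as additional explanatory variables:

```latex
% MIMIC structural and measurement equations (standard formulation):
% \eta = latent variables, x = observed causes (e.g. socioeconomic
% attributes), y = observed indicators (e.g. attitudinal statements).
\eta = \Gamma x + \zeta, \qquad y = \Lambda \eta + \varepsilon
```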
Abstract:
As researchers and practitioners move towards a vision of software systems that configure, optimize, protect, and heal themselves, they must also consider the implications of such self-management activities for software reliability. Autonomic computing (AC) describes a new generation of software systems that are characterized by dynamically adaptive self-management features. During dynamic adaptation, autonomic systems modify their own structure and/or behavior in response to environmental changes. Adaptation can result in new system configurations and capabilities, which need to be validated at runtime to prevent costly system failures. However, although the pioneers of AC recognize that validating autonomic systems is critical to the success of the paradigm, the architectural blueprint for AC does not provide a workflow or supporting design models for runtime testing. This dissertation presents a novel approach for seamlessly integrating runtime testing into autonomic software. The approach introduces an implicit self-test feature into autonomic software by tailoring the existing self-management infrastructure to runtime testing. Autonomic self-testing facilitates activities such as test execution, code coverage analysis, timed test performance, and post-test evaluation. In addition, the approach is supported by automated testing tools and a detailed design methodology. A case study that incorporates self-testing into three autonomic applications is also presented. The findings of the study reveal that autonomic self-testing provides a flexible approach for building safe, reliable autonomic software, while limiting the development and performance overhead through software reuse.
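A minimal sketch of the implicit self-test idea, under assumed names and a deliberately toy validation check (the dissertation's framework is richer, covering test execution, coverage analysis, and timed performance): the adaptation path itself runs a runtime test against the new configuration and rolls back when validation fails.

```python
class AutonomicComponent:
    def __init__(self):
        self.config = {"threads": 2}

    def adapt(self, new_config):
        """Apply a dynamic adaptation, gated by a runtime self-test."""
        old_config = self.config
        self.config = new_config
        if not self.self_test():
            self.config = old_config  # post-test evaluation failed: roll back
            raise RuntimeError("adaptation rejected by runtime self-test")

    def self_test(self):
        # Runtime validation of the adapted configuration; a real system
        # would execute a managed test suite and collect coverage here.
        threads = self.config.get("threads")
        return isinstance(threads, int) and threads > 0


component = AutonomicComponent()
component.adapt({"threads": 8})        # accepted by the self-test
try:
    component.adapt({"threads": 0})    # rejected and rolled back
except RuntimeError as e:
    print(e)
```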
Abstract:
Automated acceptance testing is testing of software done at a higher level, to check whether the system abides by the requirements desired by the business clients, by means of scripts separate from the software itself. This project is a study of the feasibility of acceptance tests written according to Behavior Driven Development principles. The project includes an implementation part in which automated acceptance testing is written for the Touch-point web application developed by Dewire (a software consultancy) for Telia (a telecom company), based on the requirements received from the customer (Telia). The automated acceptance testing uses the Cucumber-Selenium framework, which enforces Behavior Driven Development principles. The purpose of the implementation is to verify the practicability of this style of acceptance testing. From the completed implementation, it was concluded that all real-world customer requirements can be converted into executable specifications, and that the process was neither time-consuming nor difficult for a less experienced programmer such as the author. The project also includes a survey to measure the learnability and understandability of Gherkin, the language that Cucumber understands. The survey consists of some Gherkin examples followed by questions that include making changes to the Gherkin examples. The survey had three parts: the first easy, the second medium, and the third most difficult. The survey also had a linear scale from 1 to 5 for rating the difficulty level of each part, where 1 stood for very easy and 5 for very difficult. The time at which participants began the survey was also recorded, in order to calculate the total time taken to learn the material and answer the questions. The survey was taken by 18 employees of Dewire whose primary working role was programmer, tester, or project manager. In the results, testers and project managers were grouped as non-programmers. The survey concluded that Gherkin is very easy and quick to learn, with participants rating it as very easy.
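To give a flavor of the Gherkin style whose learnability the survey measures, here is a hypothetical scenario with matching step definitions; the real project bound Gherkin to Selenium through Cucumber, while this sketch uses Python's pytest-bdd instead and assumes a Selenium WebDriver fixture named `browser` (e.g. from pytest-selenium). The scenario text and element IDs are invented.

```python
# The Gherkin feature text below would live in login.feature:
#
#   Feature: Login
#     Scenario: Successful login
#       Given the user is on the login page
#       When the user submits valid credentials
#       Then the dashboard is shown
#
from pytest_bdd import scenario, given, when, then


@scenario("login.feature", "Successful login")
def test_successful_login():
    pass  # steps below are collected and executed by pytest-bdd


@given("the user is on the login page")
def login_page(browser):
    browser.get("https://example.test/login")  # placeholder URL


@when("the user submits valid credentials")
def submit_credentials(browser):
    browser.find_element("id", "user").send_keys("demo")
    browser.find_element("id", "pass").send_keys("secret")
    browser.find_element("id", "submit").click()


@then("the dashboard is shown")
def dashboard_shown(browser):
    assert "Dashboard" in browser.title
```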