Abstract:
The SystemVerilog implementation of the Open Verification Methodology (OVM) is exercised on an 8b/10b RTL open core design, intended as a simple yet complete exercise that exposes the key features of OVM. Emphasis is placed on the actual usage of the verification components rather than on a complete verification flow, with the aim of helping readers unfamiliar with OVM apply the methodology to their own designs. A link to the complete code is provided to reinforce this aim. We found the methodology easy to use, though intimidating at first glance, especially for someone with little experience in object-oriented programming. However, the flexibility, portability and reusability of verification code become clear once the first steps are taken.
Abstract:
This paper considers left-invariant control systems defined on the Lie groups SU(2) and SO(3). Such systems have a number of applications in both classical and quantum control problems. The purpose of this paper is two-fold. Firstly, the optimal control problem for a system varying on these Lie groups, with a cost that is quadratic in the control, is lifted to their Hamiltonian vector fields through the Maximum Principle of optimal control and explicitly solved. Secondly, the control systems are integrated down to the level of the group to give the solutions for the optimal paths corresponding to the optimal controls. In addition, it is shown that integrating these equations on the Lie algebra su(2) gives simpler solutions than when they are integrated on the Lie algebra so(3).
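For orientation, a minimal LaTeX sketch of the standard form such problems take; the control vector fields \(B_i\) and weights \(c_i\) are generic placeholders, not quantities taken from the paper:

```latex
\[
\dot g = g \sum_{i=1}^{m} u_i B_i , \qquad g \in G,\; B_i \in \mathfrak{g},
\qquad
J = \tfrac12 \int_0^T \sum_{i=1}^{m} c_i\, u_i^2 \,\mathrm{d}t .
\]
% The Maximum Principle lifts this to the (normal) Hamiltonian
\[
H = \sum_{i=1}^{m} u_i p_i - \tfrac12 \sum_{i=1}^{m} c_i u_i^2 ,
\qquad
\frac{\partial H}{\partial u_i} = 0 \;\Rightarrow\; u_i = \frac{p_i}{c_i},
\]
% whose Hamiltonian vector field is integrated on \(\mathfrak{su}(2)\) or
% \(\mathfrak{so}(3)\), and then down to the group itself.
```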
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated; these include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer, which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2; this led to an error in the transport direction and hence an error in the tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer: turning off the convective mixing parameterisation in the UM significantly reduced it. Finally, air quality forecasts were found to be sensitive to the timing of synoptic-scale features. An error of only 1 h in the position of the cold front relative to the tracer release location resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
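To make the clipping point concrete, here is a toy 1-D sketch (not the UM's actual scheme; the grid size, wind speed and time step are arbitrary illustrative choices) of how cubic interpolation in a semi-Lagrangian step undershoots near sharp gradients, and how setting the resulting negatives to zero inflates total mass:

```python
import numpy as np

# Toy 1-D semi-Lagrangian advection with cubic Lagrange interpolation.
# Illustrates how cubic undershoot near a sharp tracer edge produces
# negative values that, once clipped to zero, add spurious mass.

nx, u, dx, dt = 100, 1.0, 1.0, 0.4          # grid points, wind, spacing, step
q = np.zeros(nx)
q[40:60] = 1.0                               # top-hat tracer release

def cubic_interp(q, x):
    """Cubic Lagrange interpolation of periodic field q at fractional index x."""
    i = int(np.floor(x))
    f = x - i
    idx = [(i - 1) % len(q), i % len(q), (i + 1) % len(q), (i + 2) % len(q)]
    w = [-f * (f - 1) * (f - 2) / 6, (f + 1) * (f - 1) * (f - 2) / 2,
         -(f + 1) * f * (f - 2) / 2, (f + 1) * f * (f - 1) / 6]
    return sum(wk * q[k] for wk, k in zip(w, idx))

mass0 = q.sum() * dx
for _ in range(50):
    depart = np.arange(nx) - u * dt / dx     # departure points of each grid node
    q = np.array([cubic_interp(q, x) for x in depart])
    q[q < 0.0] = 0.0                         # clip negatives, as described above

print("relative mass change after clipping:", q.sum() * dx / mass0 - 1.0)
```

The printed mass change is positive, which is exactly the overprediction mechanism the abstract describes and the motivation for a flux-corrected transport method.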
Abstract:
Developing and implementing a technology for Facilities Management (FM) can be a complex process, particularly when the technology impacts on the organisation as a whole. There are often a number of relevant actors, internal and external to FM, who should be engaged. This engagement is guided by the strategy of the organisation, which is led by top management decisions; indeed, it is top management who make the final decision to implement a technology. Top management and other relevant actors will each have their own discourses toward the implementation of the technology, based on how they foresee it benefitting the organisation. This paper examines the actors who play a relevant and necessary part in supporting and implementing a technology for FM, and how an actor's discourse toward the project inhibits or speeds up the implementation. The methods used for this paper are based on a two-year case study in an FM department, where a technology development was observed and interviews with key participants were conducted. Critical discourse analysis is used to analyse the data. Prominent discourses that emerge from the data are emphasised during the process of introducing the technology. This research moves beyond focusing purely on project successes to examine the difficulties and hurdles that must be overcome to reach a successful technology implementation.
Abstract:
The role of the academic in the built environment seems generally to be not well understood or articulated. While this problem is not unique to our field, there are plenty of examples in a wide range of academic disciplines where the academic role has been fully articulated. But built environment academics have tended not to look beyond their own literature and their own vocational context in trying to give meaning to their academic work. The purpose of this keynote presentation is to explore the context of academic work generally, and the connections between education, research and practice in the built environment specifically. By drawing on ideas from the sociology of the professions, the role of universities, and the fundamentals of social science research, a case is made that helps to explain the kind of problems that routinely obstruct academic progress in our field. This discussion reveals that while there are likely to be great weaknesses in much of what is published and taught in the built environment, it is not too great a stretch to provide a more robust understanding and a good basis for developing our field in a way that would enable us collectively to make a major contribution to theory-building and theory-testing, and to make a good stab at tackling some of the problems facing society at large. There is no reason to disregard the fundamental academic disciplines that underpin our knowledge of the built environment. If we contextualise our work in these more fundamental disciplines, there is every reason to think that we can have a much greater impact than we have experienced to date.
Abstract:
In the earth sciences, data are commonly cast on complex grids in order to model irregular domains such as coastlines, or to evenly distribute grid points over the globe. It is common for a scientist to wish to re-cast such data onto a grid that is more amenable to manipulation, visualization, or comparison with other data sources. The complexity of the grids presents a significant technical difficulty to the regridding process. In particular, the regridding of complex grids may suffer from severe performance issues, in the worst case scaling with the product of the sizes of the source and destination grids. We present a mechanism for the fast regridding of such datasets, based upon the construction of a spatial index that allows fast searching of the source grid. We discover that the most efficient spatial index under test (in terms of memory usage and query time) is a simple look-up table. A kd-tree implementation was found to be faster to build and to give similar query performance at the expense of a larger memory footprint. Using our approach, we demonstrate that regridding of complex data may proceed at speeds sufficient to permit regridding on-the-fly in an interactive visualization application, or in a Web Map Service implementation. For large datasets with complex grids the new mechanism is shown to significantly outperform algorithms used in many scientific visualization packages.
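To illustrate the idea, here is a minimal sketch of a look-up-table spatial index; the class name, bin count and query strategy are invented for illustration and are not the paper's implementation. Source points are bucketed on a regular lon/lat mesh, so each destination point searches only nearby buckets rather than the whole source grid:

```python
import numpy as np

class LookupTableIndex:
    """Bucket source-grid points on a regular lon/lat mesh for fast search."""

    def __init__(self, src_lon, src_lat, nbins=100):
        self.pts = np.column_stack([src_lon, src_lat])
        self.nbins = nbins
        self.buckets = {}
        for i, (lon, lat) in enumerate(self.pts):
            self.buckets.setdefault(self._key(lon, lat), []).append(i)

    def _key(self, lon, lat):
        # Map a lon/lat position to an (x, y) bucket on the regular mesh.
        return (int((lon + 180.0) / 360.0 * self.nbins) % self.nbins,
                int((lat + 90.0) / 180.0 * self.nbins) % self.nbins)

    def nearest(self, lon, lat):
        # Approximate nearest neighbour: take the closest source point
        # within the first non-empty ring of buckets around the query.
        kx, ky = self._key(lon, lat)
        for ring in range(self.nbins):
            cand = [i for dx in range(-ring, ring + 1)
                      for dy in range(-ring, ring + 1)
                      for i in self.buckets.get(
                          ((kx + dx) % self.nbins, (ky + dy) % self.nbins), [])]
            if cand:
                d2 = ((self.pts[cand] - (lon, lat)) ** 2).sum(axis=1)
                return cand[int(np.argmin(d2))]
        return None

# Regridding then reduces to one cheap query per destination cell:
#   dst_value[j] = src_value[index.nearest(dst_lon[j], dst_lat[j])]
```

Because the table is a flat dictionary of bins, build time and memory scale with the source grid alone, which matches the abstract's observation that a simple look-up table can beat a kd-tree on memory footprint for comparable query performance.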
Abstract:
Background: Child social anxiety is common, and predicts later emotional and academic impairment. Offspring of socially anxious mothers are at increased risk. It is important to establish whether individual vulnerability to disorder can be identified in young children. Method: The responses of 4.5-year-old children of mothers with social phobia (N = 62) and non-anxious mothers (N = 60) were compared, two months before school entry, using a Doll Play (DP) procedure focused on the social challenge of starting school. DP responses were examined in relation to teacher reports of anxious-depressed symptoms and social worries at the end of the child's first school term. The role of earlier child behavioral inhibition and attachment, assessed at 14 months, was also considered. Results: Compared to children of non-anxious mothers, children of mothers with social phobia were significantly more likely to give anxiously negative responses in their school DP (OR = 2.57). In turn, negative DP predicted teacher-reported anxious-depressed and social worry problems. There were no effects of infant behavioral inhibition or attachment. Conclusion: Vulnerability in young children at risk of anxiety can be identified using Doll Play narratives.
Abstract:
We hypothesized that higher doses of fluoroquinolones for a shorter duration could maintain efficacy (as measured by reduction in bacterial count) while reducing selection in chickens of bacteria with reduced susceptibility. Chicks were infected with Salmonella enterica serovar Typhimurium DT104 and treated 1 week later with enrofloxacin at the recommended dose for 5 days (water dose adjusted to give 10 mg/kg of body weight of birds, or the equivalent of water at 50 ppm) or at 2.5 or 5 times the recommended dose for 2 days or 1 day, respectively. The dose was delivered continuously (ppm) or pulsed in the water (mg/kg) or by gavage (mg/kg). In vitro in sera, increasing concentrations of enrofloxacin from 0.5 to 8 µg/ml correlated with increased activity. In vivo, the efficacy of the 1-day treatment was significantly less than that of the 2- and 5-day treatments. The 2-day treatments showed efficacy similar to that of the 5-day treatment in all but one repeat treatment group and significantly (P < 0.01) reduced the Salmonella counts. Dosing at 2.5× the recommended dose and pulsed dosing both increased the peak antibiotic concentrations in cecal contents, liver, lung, and sera, as determined by high-pressure liquid chromatography. There was limited evidence that shorter treatment regimens (in particular the 1-day regimen) selected for fewer strains with reduced susceptibility. In conclusion, the 2-day treatment would overall require a shorter withholding time than the 5-day treatment and, in view of the increased peak antibiotic concentrations, may give rise to improved efficacy, in particular for treating respiratory and systemic infections. However, it would be necessary to test the 2-day regimen in a field situation, in particular against respiratory and systemic infections, to confirm or refute this hypothesis.
Abstract:
This paper examines the interplay and tension between housing law and policy and property law, in the specific context of the right to buy (RTB). It focuses on funding arrangements between the RTB tenant and another party. It first examines how courts determine the parties' respective entitlements in the home, highlighting the difficulty of categorising, under traditional property law principles, a contribution in the form of the statutory discount conferred on the RTB tenant. Secondly, it considers possible exploitation of the RTB scheme, both at the macro level of exploitation of the policy underpinning the legislation and, at the micro level, of exploitation of the tenant. The measures contained in the Housing Act 2004 intended to curb exploitation of the RTB are analysed to determine what can be considered to be legitimate and illegitimate uses of the scheme. It is argued that, despite the government's implicit approval, certain funding arrangements by non-resident relatives fail to give effect to the spirit of the scheme.
Abstract:
We present here a straightforward method which can be used to obtain a quantitative indication of an individual academic's research output. Different versions, selections and options are presented to enable a user to easily calculate values both for stand-alone papers and for a person's collection of outputs overall. The procedure is particularly useful as a metric giving a quantitative indication of a person's research output over a time window. Examples are included to show how the method works in practice and how it compares to alternative techniques.
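The paper's own formula is not specified in the abstract. As a point of reference only, here is a minimal sketch of one widely used alternative such a metric would be compared against, the h-index restricted to a time window; the function name and data layout are illustrative assumptions:

```python
# Illustrative only: the h-index over a time window, a common baseline
# for per-person research-output metrics. This is NOT the paper's method.
# `papers` is a list of (publication_year, citation_count) pairs.

def h_index(papers, start_year, end_year):
    """Largest h such that h papers in the window have >= h citations each."""
    cites = sorted((c for y, c in papers if start_year <= y <= end_year),
                   reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

print(h_index([(2018, 10), (2019, 4), (2020, 3), (2021, 0)], 2018, 2021))  # -> 3
```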
Abstract:
I summarise certain aspects of Paul Feyerabend’s account of the development of Western rationalism, show the ways in which that account is supposed to run up against an alternative, that of Karl Popper, and then try to give a preliminary comparison of the two. My interest is primarily in whether what Feyerabend called his ‘story’ constitutes a possible history of our epistemic concepts and their trajectory. I express some grave reservations about that story, and about Feyerabend’s framework, finding Popper’s views less problematic here. However, I also suggest that one important aspect of Feyerabend’s material, his treatment of religious belief, can be given an interpretation which makes it tenable, and perhaps preferable to a Popperian approach.
Abstract:
This paper used a qualitative technique from a social scientific perspective, a model based on Hewitt and his theory of the self-concept. The purpose of this study was to investigate why some elite athletes experience troublesome periods after their careers end. Interviews were performed with five elite athletes with varying experiences after career ending; the lengths of their elite careers varied between 7 and 17 years. Two groups were formed based upon experiences after career ending: group 1 had experienced problems, for example suicidal tendencies, and group 2 had not. The results show that a troublesome period can arise independently of the career ending. The self-concept was investigated during the career, together with different kinds of variables that could affect the self, such as training and competition and social relations, both before and after retirement from sport. The results indicate that individuals in group 2, who had a highly complex self-concept based upon significant others outside elite sport during the career, coped with the new situation after career ending much better than those in group 1, who did not. Building up the self based only upon significant others within elite sport seems to find expression in a strengthened self. Intensity in training and competition was not connected with a troublesome period after retirement from sport, but it could prevent athletes from establishing contact with others outside elite sport and so reduce the complexity of the self-concept. The results further show that elite athletes who practise an individual sport train to a greater extent than elite athletes in a seasonal sport, and that practising a sport with one day off a week contributes to better opportunities for developing a more complex self-concept. Suspicions have also arisen that practising an elite sport at the highest level can lead to such extensive focus that it results in social isolation from individuals outside elite sport. Building up the self upon significant others outside elite sport during the career, and keeping in touch with significant others from elite sport after the career, seems to be the key to avoiding problems after career ending. Suggestions for further investigation are made, to see whether media exposure and status can affect the self.
Abstract:
This thesis work concentrates on a very interesting problem, the Vehicle Routing Problem (VRP). In this problem, customers or cities have to be visited and packages have to be transported to each of them, starting from a base point on the map. The goal is to solve the transportation problem: to deliver the packages on time for the customers, with enough packages for each customer, using the available resources, and, of course, to be as effective as possible.

Although this problem may seem easy to solve with a small number of cities or customers, it is not. The algorithm has to deal with several constraints, for example opening hours, package delivery times, truck capacities, etc. This makes the problem a so-called Multi-Constraint Optimization Problem (MCOP). What is more, the problem is intractable with the amount of computational power available to most of us: as the number of customers grows, the number of calculations grows exponentially, because all constraints have to be checked for each customer, and it should not be forgotten that the goal is to find a solution that is good enough before the time allowed for the calculation runs out. The first chapter introduces the problem from its basics, the Traveling Salesman Problem, and uses some theoretical and mathematical background to show why it is so hard to optimize, and why, even though it is so hard and no best algorithm is known for large numbers of customers, it is worth dealing with. Just think of a huge transportation company with tens of thousands of trucks and millions of customers: how much money could be saved if we knew the optimal path for all our packages?

Although no best algorithm is known for this kind of optimization problem, the second and third chapters try to give an acceptable solution using two algorithms: the Genetic Algorithm and Simulated Annealing. Both are based on mimicking processes from nature and materials science. These algorithms will hardly ever be able to find the best solution for the problem, but they can give a very good solution in special cases within acceptable calculation time. In these chapters (2nd and 3rd) the Genetic Algorithm and Simulated Annealing are described in detail, from their basis in the "real world" through their terminology to a basic implementation of each. The work puts a stress on the limits of these algorithms, their advantages and disadvantages, and also compares them to each other.

Finally, after the theory is presented, a simulation is executed in an artificial VRP environment with both Simulated Annealing and the Genetic Algorithm. Both solve the same problem in the same environment and are compared to each other. The environment and the implementation are also described, as are the test results obtained. Finally, possible improvements of these algorithms are discussed, and the work tries to answer the "big" question, "Which algorithm is better?", if that question even exists.
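For flavour, here is a minimal simulated-annealing sketch for a toy single-vehicle tour (a TSP, i.e. a VRP with one uncapacitated truck); the city coordinates, cooling schedule and move set are illustrative choices, not the thesis's implementation:

```python
import math, random

# Simulated annealing on a toy single-vehicle tour. All parameters
# (30 random cities, geometric cooling, 2-opt moves) are arbitrary
# illustrative choices.

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(30)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

tour = list(range(len(cities)))
best, best_len = tour[:], tour_length(tour)
temp = 1.0
while temp > 1e-3:
    # Propose a 2-opt move: reverse a randomly chosen segment of the tour.
    i, j = sorted(random.sample(range(len(tour)), 2))
    cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
    delta = tour_length(cand) - tour_length(tour)
    # Always accept improvements; accept worsenings with Boltzmann probability,
    # which is what lets the search escape local minima at high temperature.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        tour = cand
        if tour_length(tour) < best_len:
            best, best_len = tour[:], tour_length(tour)
    temp *= 0.999                      # geometric cooling schedule

print(f"best tour length: {best_len:.3f}")
```

A genetic algorithm would replace the single candidate tour with a population of tours evolved by crossover and mutation; the thesis compares the two approaches on the same environment.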
Abstract:
This research is based on consumer complaints with respect to recently purchased consumer electronics. It investigates development and device management as tools used to aid consumers and manage consumers' mobile products, in order to resolve issues before, or as soon as, the consumer is aware one exists. The problem at present is that mobile devices are becoming very advanced pieces of technology, and not all manufacturers and network providers have kept up the support element for end users. As such, the subject of the research is to investigate how device management could be used as a method to promote research and development of mobile devices and to provide a better experience for the consumer.

The wireless world is becoming increasingly complex as revenue opportunities are driven by new and innovative data services. We can no longer expect the customer to have the knowledge or ability to configure their own device. Device Management (DM) platforms can address the challenges of device configuration and support through new enabling technologies. Leveraging these technologies will allow a network operator to reduce the cost of subscriber ownership, drive increased ARPU (Average Revenue per User) by removing barriers to adoption, reduce churn by improving the customer experience, and increase customer loyalty. DM technologies provide a flexible and powerful management method, but they manage the same device features that have historically been configured manually through call centres or by the end user making changes directly on the device. For this reason DM technologies must be treated as part of a wider support solution. The traditional requirements for discovery, fault finding, troubleshooting and diagnosis are still as relevant with DM as they are in the current human support environment, yet the current generation of solutions does little to address this problem.

In deploying an effective device management solution, the network operator must consider the integration of the DM platform, interfacing with many areas of the business, supported by knowledge of the relationships between devices, applications, solutions and services, maintained on an ongoing basis. Complementing the DM solution with published device information, setup guides, training material and web-based tools will ensure the quality of the customer experience, ensuring that problems are completely resolved and driving data usage by focusing customer education on the use of the wireless service. In this way device management becomes a tool used both internally, within the network operator or device vendor, and by customers themselves, with each user empowered to effectively manage the device without any prior knowledge or experience, confident that the changes they apply will be relevant, accurate, stable and compatible. The value offered by an effective DM solution with an expert knowledge service will become a significant differentiator for the network operator in an ever more competitive wireless market.

This research document is intended to highlight some of the issues the industry faces as device management technologies become more prevalent, and offers some potential solutions to simplify the increasingly complex task of managing devices on the network, where device management can be used as a tool to aid customer relations and manage customers' mobile products in order to resolve issues before the user is aware one exists.
The research is broken down into the following areas: Customer Relationship Management (CRM), device management, the role of knowledge within DM, companies that have successfully implemented device management, and the future of device management and CRM. It also draws on questionnaires aimed at technical support agents and mobile device users, and on interviews carried out with CRM managers within support centres to further the evidence gathered. To conclude, the document considers the advantages and disadvantages of device management and attempts to determine the influence it will have over the customer support centre, and what methods could be used to implement it.
Abstract:
The administration of clinical practice placements for nursing students is a highly complex and information-driven task. This demonstration is intended to give insight into the web-based system KliPP (a Swedish acronym for Clinical Practice Planning) and to discuss the possibilities for its further development and use.