870 results for WEC deployment


Relevance:

10.00%

Publisher:

Abstract:

On December 9, 2007, a 4.9 mb earthquake occurred in the middle of the São Francisco Craton, in a region with no previously known activity larger than 4 mb. This event reached intensity VII MM (Modified Mercalli) and caused the first earthquake fatality in Brazil. The activity had started on May 25, 2007 with a magnitude 3.5 event and continued for several months, motivating the deployment of a local six-station network. A three-week seismic quiescence was observed before the mainshock. Initial absolute hypocenters were calculated with best-fitting velocity models, and relative locations were then determined with hypoDD. The aftershock distribution indicates a 3 km long rupture for the mainshock. The fault plane solution, based on P-wave polarities and the hypocentral trend, indicates a reverse faulting mechanism on a N30°E-striking plane dipping about 40° to the SE. The rupture depth extends only from about 0.3 to 1.2 km. Despite the shallow depth of the mainshock, no surface feature could be correlated with the fault plane. Aeromagnetic data in the epicentral area show short-wavelength lineaments trending NNE-SSW to NE-SW, which we interpret as faults and fractures in the craton basement beneath the surface limestone layer. We propose that the Caraíbas-Itacarambi seismicity is probably associated with reactivation of these basement fractures and faults under the present E-W compressional stress field in this region of the South American Plate.


Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. Because predictive performance evaluation is a multidimensional problem, single scalar summaries such as error rate, although convenient due to their simplicity, can seldom capture all the aspects that a complete and reliable evaluation must consider. For this reason, graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these methods lies in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing those aspects to an arbitrarily chosen (and often biased) single scalar measure. Furthermore, to select a suitable graphical method for a given task, it is crucial to identify each method's strengths and weaknesses. This paper surveys graphical methods often used for predictive performance evaluation. By presenting these methods within the same framework, we hope to shed some light on which methods are more suitable in different situations.
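A representative graphical method of this kind is the ROC curve, which plots the trade-off between false positive rate and true positive rate instead of collapsing them into one error rate. As an illustration not taken from the paper, ROC points can be computed by sweeping a decision threshold over the classifier's scores:

```python
def roc_points(scores, labels):
    """Compute ROC curve points (FPR, TPR) by sweeping the decision
    threshold across the sorted classifier scores (ties not grouped)."""
    # Sort instances by decreasing score; each prefix of this order
    # corresponds to one threshold setting.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    pos = sum(labels)            # total positives
    neg = len(labels) - pos      # total negatives
    tp = fp = 0
    points = [(0.0, 0.0)]
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# Four instances with scores and true binary labels.
pts = roc_points([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0])
# Each point is one (false positive rate, true positive rate) trade-off.
```

The curve always starts at (0, 0) and ends at (1, 1); the closer intermediate points lie to the top-left corner, the better the ranking produced by the classifier.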


This work models the carbon neutralization capacity of Brazil's ethanol program since 1975. In addition to the biofuel itself, we assessed the mitigation potential of other energy products, such as bioelectricity and the CO2 captured during fermentation of sugar cane juice. Finally, we projected the neutralization capacity of the sugar cane bioenergy system over the next 32 years. The balance between several carbon stocks and flows was considered in the model, including the effects of land-use change. Our results show that neutralization of the carbon released due to land-use change was attained only in 1992, and the maximum mitigation potential of the sugar cane sector was 128 tonnes of CO2 per hectare in 2006. An idealized reconstitution of the deployment of the sugar cane sector, including full exploitation of bioelectricity's potential plus capture of the CO2 released during fermentation, shows that neutralization of land-use change emissions would have been achieved in 1988, and the mitigation potential would have been 390 tCO2/ha. Finally, forecasts up to 2039 show a mitigation potential of 836 tCO2/ha, which corresponds to 5.51 kg of CO2 per liter of ethanol produced, or 55% above the negative emission level.
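As a quick unit-consistency check on the 2039 figures, the per-hectare and per-litre values quoted above jointly imply a cumulative ethanol output per hectare; the arithmetic below uses only the numbers in the abstract:

```python
# Figures quoted in the abstract for 2039.
mitigation_per_ha_t = 836        # tonnes of CO2 per hectare (cumulative)
mitigation_per_litre_kg = 5.51   # kg of CO2 per litre of ethanol

# Implied cumulative ethanol output per hectare:
# convert tonnes to kg, then divide by the per-litre mitigation figure.
implied_litres_per_ha = mitigation_per_ha_t * 1000 / mitigation_per_litre_kg
# ≈ 151,724 litres of ethanol per hectare over the modelled period
```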


Internet Protocol TV (IPTV) is predicted to be a key technology winner in the coming years. Deployment efforts have focused on a centralized IPTV model combining the video hub office (VHO), encoders, a controller, the access network, and the home network. Regardless of whether the network is delivering live TV, video on demand (VOD), or time-shift TV, all content and network traffic resulting from subscriber requests must traverse the entire network, from the super-headend all the way to each subscriber's set-top box (STB). IPTV services require very stringent QoS guarantees. When IPTV traffic shares network resources with other traffic such as data and voice, ensuring QoS while efficiently utilizing network resources is a key and challenging issue. QoS is measured here in the network-centric terms of delay jitter, packet loss, and bounds on delay. The main focus of this thesis is on optimized bandwidth allocation and smooth data transmission. A traffic model is proposed for smoothly delivering video services over an IPTV network, together with an evaluation of its QoS performance. Following Maglaris et al. [5], the coding bit rate of a single video source is analyzed first. Various statistical quantities are derived from bit rate data collected with a conditional-replenishment interframe coding scheme. Two correlated Markov process models (one in discrete time and one in continuous time) are shown to fit the experimental data and are used to model the input rates of several independent sources feeding a statistical multiplexer. Preventive control mechanisms, including connection admission control (CAC) and traffic policing, are used for traffic control. The QoS of a common bandwidth scheduler (FIFO) is evaluated using fluid models with Markovian queueing methods, and the results are analyzed both with a simulator and analytically, measuring packet loss, buffer overflow, and mean waiting time among the network users.
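The discrete-time Markov source model adopted from Maglaris et al. [5] can be sketched in simulation: each video source toggles between a low and a high bit rate according to a two-state Markov chain, and the superposition of independent sources feeds a finite FIFO buffer. All parameter values below (transition probabilities, rates, capacity, buffer size) are illustrative, not values from the thesis:

```python
import random

def simulate_multiplexer(n_sources=20, steps=50_000, p_up=0.1, p_down=0.3,
                         low_rate=1, high_rate=4, capacity=55,
                         buffer_size=40, seed=42):
    """Simulate N two-state Markov video sources feeding a FIFO buffer.

    Each step, every source moves independently between a low-rate and a
    high-rate state; aggregate arrivals beyond the service capacity queue
    in the buffer, and anything beyond the buffer counts as loss.
    """
    rng = random.Random(seed)
    high = [False] * n_sources          # all sources start in the low state
    queue = 0
    arrived = lost = 0
    for _ in range(steps):
        for i in range(n_sources):      # Markov state transitions
            if high[i]:
                if rng.random() < p_down:
                    high[i] = False
            elif rng.random() < p_up:
                high[i] = True
        arrivals = sum(high_rate if h else low_rate for h in high)
        arrived += arrivals
        queue = max(0, queue + arrivals - capacity)   # FIFO service
        if queue > buffer_size:                       # overflow -> loss
            lost += queue - buffer_size
            queue = buffer_size
    return lost / arrived               # fraction of traffic lost

loss_ratio = simulate_multiplexer()
```

With these parameters the mean aggregate rate (20 × 1.75 = 35 units) is well below the service capacity (55), so loss occurs only during correlated bursts of high-rate sources, which is exactly the behaviour the fluid/Markov analysis quantifies.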


This research is based on consumer complaints about recently purchased consumer electronics. It investigates device management as a tool to aid consumers and manage their mobile products, with the goal of resolving issues before, or as soon as, the consumer is aware one exists. The problem at present is that mobile devices have become very advanced pieces of technology, and not all manufacturers and network providers have kept up the end-user support element. The subject of the research is therefore to investigate how device management could be used to promote research and development of mobile devices and provide a better experience for the consumer. The wireless world is becoming increasingly complex as revenue opportunities are driven by new and innovative data services. We can no longer expect customers to have the knowledge or ability to configure their own devices. Device management (DM) platforms can address the challenges of device configuration and support through new enabling technologies. Leveraging these technologies allows a network operator to reduce the cost of subscriber ownership, drive increased ARPU (average revenue per user) by removing barriers to adoption, reduce churn by improving the customer experience, and increase customer loyalty. DM technologies provide a flexible and powerful management method, but they manage the same device features that have historically been configured manually through call centers or by end users making changes directly on the device. For this reason, DM technologies must be treated as part of a wider support solution. The traditional requirements for discovery, fault finding, troubleshooting, and diagnosis are as relevant with DM as they are in the current human support environment, yet the current generation of solutions does little to address this problem.
In deploying an effective device management solution, the network operator must consider the integration of the DM platform, which interfaces with many areas of the business and must be supported by knowledge of the relationships between devices, applications, solutions, and services, maintained on an ongoing basis. Complementing the DM solution with published device information, setup guides, training material, and web-based tools will ensure the quality of the customer experience, ensuring that problems are completely resolved and driving data usage by focusing customer education on the use of the wireless service. In this way device management becomes a tool used both internally, within the network operator or device vendor, and by customers themselves, with each user empowered to manage the device effectively without prior knowledge or experience, confident that the changes they apply will be relevant, accurate, stable, and compatible. The value offered by an effective DM solution backed by an expert knowledge service will become a significant differentiator for the network operator in an ever more competitive wireless market. This research document is intended to highlight some of the issues the industry faces as device management technologies become more prevalent, and it offers some potential solutions to simplify the increasingly complex task of managing devices on the network, where device management can be used as a tool to aid customer relations and manage customers' mobile products so that issues are resolved before the user is aware one exists. The research is broken down into the following areas: customer relationship management (CRM), device management, the role of knowledge within DM, companies that have successfully implemented device management, and the future of device management and CRM. It also includes questionnaires aimed at technical support agents and mobile device users, and an interview carried out with CRM managers within a support centre to corroborate the evidence gathered.
To conclude, the document considers the advantages and disadvantages of device management and attempts to determine the influence it will have on the customer support centre, and what methods could be used to implement it.


The main objective of the thesis "Conceptual Product Development in Small Corporations" is to test, via a case study, the MFD™ method (Erixon G., 1998) combined with PMM in a product development project (henceforth called the MFD™/PMM method). The MFD™/PMM method, used for documenting and controlling a product development project, has been applied in several industries and projects since it was introduced. The method has proved a good way of working with the early stages of product development; however, it has almost exclusively been applied in large industries, which means there are very few references to how the MFD™/PMM method works in a small corporation. Therefore, the case study in this thesis was carried out in a small corporation to find out whether the MFD™/PMM method can also be applied and used in such a corporation. The PMM was proposed in a paper presented at Delft University of Technology in the Netherlands in 1998 by the author and Gunnar Erixon (see appended paper C: "The chart of modular function deployment"). The title "The chart of modular function deployment" was later renamed PMM, Product Management Map (Sweden PreCAD AB, 2000). The PMM consists of a QFD matrix linked to the MIM (Module Indication Matrix) via a coupling matrix, which makes it possible to form an unbroken chain from the customer domain to the designed product and modules. The PMM makes it easy to correct omissions made in creating new products and modules. In this thesis the universal MFD™/PMM method has been adapted by the author to three models of product development: original, evolutionary, and incremental development. The evolutionarily adapted MFD™/PMM method was tested in a case study at Atlings AB in the community of Ockelbo, a small corporation with a total of 50 employees and an annual turnover of 9 million €.
The product studied at the corporation was a steady rest for supporting long shafts in turning. The project team consisted of the managing director, a sales promoter, a production engineer, a design engineer, and a workshop technician, with the author as team leader and a colleague from Dalarna University as discussion partner. The project team held six meetings. The team managed to use MFD™ and to create a complete PMM of the studied product. No real problems occurred in the project work; on the contrary, the team members worked very well as a group and had ideas for improving the product. Instead, the challenge for a small company is how to work with the MFD™/PMM method in the long run. If the MFD™/PMM method is to be a useful tool for the company, it needs to be used continuously, and that requires financial and personnel resources. One way for the company to overcome the probable lack of resources regarding capital and personnel is to establish a good cooperation with a regional university or a development centre.
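The unbroken chain the PMM establishes, from customer weights through the QFD matrix and the coupling matrix to module priorities in the MIM, amounts to chained matrix products. A minimal sketch with invented weights and relationship values (the 9/3/1/0 scale is conventional in QFD generally, not taken from the thesis):

```python
def matvec(vec, mat):
    """Row vector times matrix: result[j] = sum_i vec[i] * mat[i][j]."""
    return [sum(vec[i] * mat[i][j] for i in range(len(vec)))
            for j in range(len(mat[0]))]

# Customer requirement weights (customer domain) -- invented numbers.
customer_weights = [5, 3, 1]

# QFD matrix: rows = customer requirements, columns = product properties.
qfd = [[9, 3, 0],
       [1, 9, 3],
       [0, 1, 9]]

# Coupling matrix: product properties (rows) linked to modules (columns).
coupling = [[9, 0],
            [3, 9],
            [0, 3]]

# Unbroken chain: customer weights -> property importance -> module priority.
property_importance = matvec(customer_weights, qfd)      # [48, 43, 18]
module_priority = matvec(property_importance, coupling)  # [561, 441]
```

Because every module priority is traceable back through the matrices to specific customer weights, omissions in a new product or module become visible as zero or implausibly low entries along the chain.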



Data mining can be used in the healthcare industry to "mine" clinical data and discover hidden information for intelligent and effective decision making. Discovery of hidden patterns and relationships often goes unexploited, yet advanced data mining techniques can remedy this. This thesis deals with Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests, and external symptoms used to predict chronic renal disease. Data from the database are initially imported into Weka (3.6), and the chi-square method is used for feature selection. After normalizing the data, three classifiers were applied and the quality of the output evaluated: Decision Tree, Naïve Bayes, and the K-Nearest Neighbour algorithm. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. The efficiency of Decision Tree and KNN was almost the same, but Naïve Bayes proved to have a comparative edge over the others. Further, sensitivity and specificity tests are used as statistical measures to examine the performance of the binary classification. Sensitivity (also called recall rate in some fields) measures the proportion of actual positives that are correctly identified, while specificity measures the proportion of negatives that are correctly identified. The CRISP-DM methodology is applied to build the mining models; it consists of six major phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.
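The sensitivity and specificity measures defined above follow directly from the binary confusion matrix; a minimal sketch with illustrative labels (1 = disease present, not data from the thesis):

```python
def sensitivity_specificity(actual, predicted):
    """Sensitivity = recall on positives; specificity = recall on
    negatives; both computed from binary (0/1) label sequences."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative labels: 1 = chronic renal disease present.
actual    = [1, 1, 1, 0, 0, 0, 0, 1]
predicted = [1, 1, 0, 0, 0, 1, 0, 1]
sens, spec = sensitivity_specificity(actual, predicted)
# sens = 3/4 = 0.75 (one diseased case missed),
# spec = 3/4 = 0.75 (one healthy case falsely flagged)
```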


Single-page applications have historically been subject to strong market forces driving fast development and deployment at the expense of quality control and changeable code, which are important factors for maintainability. In this report we develop two functionally equivalent applications using AngularJS and React and compare their maintainability as defined by ISO/IEC 9126. AngularJS and React represent two distinct approaches to web development: AngularJS is a general framework providing rich base functionality, while React is a small, specialized library for efficient view rendering. The quality comparison was accomplished by calculating the Maintainability Index for each application. Version control analysis was used to determine quality indicators during development and subsequent maintenance, where new functionality was added in two steps. The results show no major differences in maintainability between the initial applications. As more functionality is added, the Maintainability Index decreases faster in the AngularJS application, indicating a steeper increase in complexity compared to the React application. Source code analysis reveals that changes in data flow require significantly larger modifications of the AngularJS application due to its inherent architecture for data flow. We conclude that frameworks are useful when they facilitate development of known requirements, but less so when applications and systems grow in size.
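The Maintainability Index is commonly computed from Halstead volume, cyclomatic complexity, and lines of code. The classic three-factor formula (Oman & Hagemeister) is sketched below; the exact variant and tooling the report used may differ, and the input values are illustrative:

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Classic three-factor Maintainability Index; higher values mean
    more maintainable code. Some tools rescale this to a 0-100 range."""
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

# Illustrative module metrics, not measurements from the report.
mi = maintainability_index(halstead_volume=1000,
                           cyclomatic_complexity=10,
                           loc=200)
```

Because all three terms are subtracted, growth in any of volume, complexity, or size pushes the index down, which is why adding functionality faster than refactoring shows up as a falling MI.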


Distributed systems comprised of autonomous self-interested entities require some sort of control mechanism to ensure the predictability of the interactions that drive them. This is certainly true in the aerospace domain, where manufacturers, suppliers and operators must coordinate their activities to maximise safety and profit, for example. To address this need, the notion of norms has been proposed which, when incorporated into formal electronic documents, allow for the specification and deployment of contract-driven systems. In this context, we describe the CONTRACT framework and architecture for exactly this purpose, and describe a concrete instantiation of this architecture as a prototype system applied to an aerospace aftercare scenario.


The behaviours of autonomous agents may deviate from those deemed to be for the good of the societal systems of which they are a part. Norms have therefore been proposed as a means to regulate agent behaviours in open and dynamic systems, where these norms specify the obliged, permitted and prohibited behaviours of agents. Regulation can effectively be achieved through use of enforcement mechanisms that result in a net loss of utility for an agent in cases where the agent's behaviour fails to comply with the norms. Recognition of compliance is thus crucial for achieving regulation. In this paper we propose a generic architecture for observation of agent behaviours, and recognition of these behaviours as constituting, or counting as, compliance or violation. The architecture deploys monitors that receive inputs from observers, and processes these inputs together with transition network representations of individual norms. In this way, monitors determine the fulfillment or violation status of norms. The paper also describes a proof of concept implementation and deployment of monitors in electronic contracting environments.
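The transition-network representation of an individual norm can be sketched as a small state machine: the monitor consumes observed events and updates the norm's status until it is fulfilled or violated. The states and events below are illustrative (an obligation that must be discharged before a deadline), not the paper's actual networks:

```python
class ObligationMonitor:
    """Minimal transition network for one obligation norm: once the
    norm is activated, the agent must 'deliver' before the 'deadline'
    event occurs, otherwise the behaviour counts as a violation."""

    def __init__(self):
        self.state = "inactive"

    def observe(self, event):
        # Each (state, event) pair maps to a new state; unmatched
        # observations leave the norm's status unchanged.
        transitions = {
            ("inactive", "activate"): "active",
            ("active", "deliver"):    "fulfilled",
            ("active", "deadline"):   "violated",
        }
        self.state = transitions.get((self.state, event), self.state)
        return self.state

# The monitor receives inputs from observers, one event at a time.
monitor = ObligationMonitor()
for ev in ["activate", "deadline"]:
    monitor.observe(ev)
# monitor.state is now "violated": the obliged behaviour did not
# occur in time, so enforcement mechanisms could be triggered.
```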



In the domain of aerospace aftermarkets, which often have long supply chains feeding into the maintenance of aircraft, contracts are used to establish agreements between aircraft operators and maintenance suppliers. However, violations at the bottom of the supply chain (part suppliers) can easily cascade to the top (aircraft operators), making it difficult to determine the source of a violation and to address it. In this context, we have developed a global monitoring architecture that ensures the detection of norm violations and generates explanations for the origin of violations. In this paper, we describe the implementation and deployment of a global monitor in the aerospace domain of [8] and show how it generates explanations for violations within the maintenance supply chain. We show how these explanations can be used not only to detect violations at runtime, but also to uncover potential problems in contracts before their deployment, thus improving them.


This project involves the design and implementation of a global electronic tracking system intended for use by trans-oceanic vessels, using the U.S. Government's Global Positioning System (GPS) and a wireless connection to a networked computer. Traditional navigation skills are being replaced with highly accurate electronics. GPS receivers, computers, and mobile communication are becoming common among both recreational and commercial boaters. With computers and advanced communication available throughout the maritime world, information can be shared instantaneously around the globe. This ability to monitor one's whereabouts from afar can provide an increased level of safety and efficiency. Current navigation software seldom includes the capability of providing up-to-the-minute navigation information for remote display. Remote access to this data will allow boat owners to track the progress of their boats, land-based organizations to monitor weather patterns and suggest course changes, and school groups to track the progress of a vessel and learn about navigation and science. The software developed in this project allows navigation information from a vessel to be remotely transmitted to a land-based server, for interpretation and deployment to remote users over the Internet. It differs from current software in that it allows the tracking of one vessel by multiple users and provides a means for two-way text messaging between users and the vessel. Beyond the coastal coverage provided by cellular telephones, mobile communication is advancing rapidly. Current tools such as satellite telephones and single-sideband radio enable worldwide communications, including the ability to connect to the Internet. If current trends continue, portable global communication will be available at a reasonable price, and Internet connections on boats will become more common.
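Navigation data of the kind transmitted here is typically emitted by GPS receivers as NMEA 0183 sentences; as an illustrative sketch (the project's actual wire format is not specified in the abstract), a minimal parser for the position fields of a GGA sentence:

```python
def parse_gga(sentence):
    """Extract latitude/longitude from an NMEA 0183 GGA sentence and
    convert them to signed decimal degrees (no checksum validation)."""
    fields = sentence.split(",")
    lat_raw, lat_hem = fields[2], fields[3]   # ddmm.mmm, N/S
    lon_raw, lon_hem = fields[4], fields[5]   # dddmm.mmm, E/W
    # NMEA packs degrees and decimal minutes together; split and combine.
    lat = int(lat_raw[:2]) + float(lat_raw[2:]) / 60.0
    lon = int(lon_raw[:3]) + float(lon_raw[3:]) / 60.0
    if lat_hem == "S":
        lat = -lat
    if lon_hem == "W":
        lon = -lon
    return lat, lon

# A widely cited example GGA sentence (fix near Munich, Germany).
lat, lon = parse_gga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
# lat ≈ 48.1173, lon ≈ 11.5167
```

A land-based server receiving such sentences over a wireless link could parse each fix this way before storing it for display to the remote users described above.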