964 results for Protocol model


Relevance: 60.00%

Abstract:

Key management plays a fundamental role in secure communications. Designing and testing key management protocols is difficult, because these protocols must work flawlessly despite any abuse. The main objective of this work was to design and implement a tool that helps to specify such a protocol and makes it possible to test the protocol while it is still under development. The tool generates compile-ready Java code from a key management protocol model. A modelling method for these protocols, based on the Unified Modeling Language (UML), was also developed. The protocol is modelled, exported as an XMI file and read by the code generator tool, which produces Java code that, once compiled, is immediately executable with test software.
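
As a purely illustrative sketch of the kind of transformation such a tool performs (this is not the thesis tool; the XMI element/attribute names and the generated Java layout are assumptions, since real XMI exports vary by modelling tool), a UML class read from an XMI file can be turned into a Java class skeleton roughly as follows:

# Minimal sketch: read UML classes from an XMI export and emit Java skeletons.
import xml.etree.ElementTree as ET

def generate_java(xmi_path):
    tree = ET.parse(xmi_path)
    sources = {}
    for elem in tree.iter():
        # A UML class is assumed to appear as a packagedElement with xmi:type="uml:Class".
        xmi_type = next((v for k, v in elem.attrib.items() if k.endswith("type")), "")
        if elem.tag.endswith("packagedElement") and xmi_type == "uml:Class":
            name = elem.get("name", "Unnamed")
            fields = [a.get("name") for a in elem.iter() if a.tag.endswith("ownedAttribute")]
            body = "\n".join(f"    private String {f};" for f in fields if f)
            sources[name] = f"public class {name} {{\n{body}\n}}\n"
    return sources  # class name -> compile-ready Java source text

if __name__ == "__main__":
    for name, src in generate_java("protocol_model.xmi").items():
        with open(f"{name}.java", "w", encoding="utf-8") as out:
            out.write(src)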

Relevance: 60.00%

Abstract:

Drawing on his recent experience in the climate negotiations in Doha as an advisor and negotiator on a wide variety of issues, Andrei Marcu offers his assessment of the progress achieved in the two weeks of intensive talks. In spite of modest results, he describes the talks as an important and necessary step in the revolution, first ignited at the Montreal negotiations in 2005, that rejected the top-down Kyoto Protocol model in favour of a bottom-up climate change regime. In his view, the decisions taken in Doha enable the start of a new negotiating process aimed at delivering a new global climate agreement.

Relevance: 60.00%

Abstract:

In the last decade mobile wireless communications have witnessed explosive growth in user penetration rates and widespread deployment around the globe. This tendency is expected to continue with the convergence of fixed wired Internet networks with mobile ones and with the evolution towards the full IP architecture paradigm. Mobile wireless communications will therefore be of paramount importance to the development of the information society of the near future. In particular, a research topic of special relevance in telecommunications today is the design and implementation of fourth-generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies over a core network fully compliant with the Internet Protocol (the all-IP paradigm). Such networks will have to sustain stringent quality of service (QoS) requirements and the high data rates expected from the multimedia applications of the near future.

The approach followed in the design and implementation of current-generation (2G and 3G) mobile wireless networks has been the stratification of the architecture into a communication protocol model composed of a set of layers, each encompassing a set of functionalities. In such a layered protocol model, communication is allowed only between adjacent layers, through specific interface service points. This modular concept eases the implementation of new functionalities, as the behaviour of each layer in the protocol stack is not affected by the others. However, the fact that lower layers in the protocol stack do not use information available at upper layers, and vice versa, degrades the achievable performance. This is particularly relevant when multiple antenna systems in a MIMO (Multiple Input Multiple Output) configuration are implemented. MIMO schemes introduce another degree of freedom for radio resource allocation: the space domain. Contrary to the time and frequency domains, radio resources mapped onto the spatial domain cannot be assumed to be completely orthogonal, because of the interference between users transmitting in the same frequency sub-channel and/or time slot but in different spatial beams. The availability of information about the state of the radio resources, from lower to upper layers, is therefore of fundamental importance for meeting the QoS levels expected by those multimedia applications.

To match application requirements to the constraints of the mobile radio channel, researchers have in recent years proposed a new paradigm for the layered communication architecture: the cross-layer design framework. In general terms, cross-layer design refers to a protocol design in which the dependence between protocol layers is actively exploited, breaking the strict rules of the original reference model that restrict communication to adjacent layers and allowing direct interaction among different layers of the stack. Efficient management of the available radio resources demands efficient, low-complexity packet schedulers that prioritize users' transmissions according to inputs provided by lower as well as upper layers of the protocol stack, fully in line with the cross-layer design paradigm.
Specifically, well-designed packet schedulers for 4G networks should maximize the available capacity while taking into account the limitations imposed by the mobile radio channel and complying with the QoS requirements of the application layer. The IEEE 802.16e standard, also known as Mobile WiMAX, appears to meet the specifications of 4G mobile networks. Its scalable architecture, low implementation cost and high data throughput enable efficient data multiplexing and low data latency, attributes essential for broadband data services. In addition, the connection-oriented approach of its medium access control layer is fully compliant with the quality of service demands of such applications. Mobile WiMAX is therefore a promising candidate for 4G mobile wireless networks. This thesis proposes the investigation, design and implementation of packet scheduling algorithms for the efficient management of the available radio resources in the time, frequency and spatial domains of Mobile WiMAX networks. The proposed algorithms combine input metrics from the physical layer with QoS requirements from upper layers, following the cross-layer design paradigm. The proposed schedulers are evaluated by means of system-level simulations, conducted on a system-level simulation platform implementing the physical and medium access control layers of the IEEE 802.16e standard.
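
A minimal sketch of the cross-layer idea described above (not the schedulers developed in the thesis; the metric, weights and user parameters are illustrative assumptions): the scheduling priority combines a physical-layer input, the instantaneous achievable rate per user, with upper-layer inputs, the long-term throughput and the head-of-line delay relative to the flow's QoS delay budget.

# Illustrative cross-layer scheduling metric: channel state (PHY) combined with
# head-of-line delay vs. QoS delay budget (upper layers). Simplified, assumed model.
from dataclasses import dataclass

@dataclass
class UserState:
    name: str
    channel_rate_bps: float      # achievable rate reported by the physical layer
    avg_rate_bps: float          # long-term average throughput (fairness input)
    hol_delay_ms: float          # head-of-line packet delay from the queue
    delay_budget_ms: float       # QoS delay requirement of the application flow

def priority(u: UserState) -> float:
    # Proportional-fair term scaled by delay urgency (assumed combination rule).
    fairness = u.channel_rate_bps / max(u.avg_rate_bps, 1.0)
    urgency = u.hol_delay_ms / max(u.delay_budget_ms, 1.0)
    return fairness * (1.0 + urgency)

def schedule(users, slots):
    """Assign each available resource slot to users in priority order."""
    ranking = sorted(users, key=priority, reverse=True)
    return {slot: ranking[i % len(ranking)].name for i, slot in enumerate(slots)}

if __name__ == "__main__":
    users = [UserState("voip", 2e6, 1e6, 18.0, 20.0),
             UserState("video", 8e6, 5e6, 40.0, 150.0)]
    print(schedule(users, ["slot0", "slot1", "slot2"]))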

Relevance: 60.00%

Abstract:

This paper proposes the development of a heterogeneous system based on the AT90CAN128 microcontroller, in which the CAN protocol model and the IEEE 802.15.4 standard are connected. The module is able to manage and monitor sensors and actuators over CAN and, through the IEEE 802.15.4 wireless standard, to communicate with the other network modules. © 2011 IEEE.
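
The bridging role of such a module can be sketched as follows (a host-side Python illustration of the forwarding logic only, not firmware for the AT90CAN128; the payload layout and field sizes are assumptions, not the paper's format):

# Illustrative bridging between a CAN frame and an IEEE 802.15.4 payload.
import struct

def can_to_wireless(can_id: int, data: bytes) -> bytes:
    """Pack a CAN frame (11-bit identifier, up to 8 data bytes) into a small
    payload suitable for carrying inside an IEEE 802.15.4 MAC frame."""
    if len(data) > 8:
        raise ValueError("classic CAN carries at most 8 data bytes")
    return struct.pack(">HB", can_id & 0x7FF, len(data)) + data

def wireless_to_can(payload: bytes):
    """Unpack the payload produced by can_to_wireless back into (id, data)."""
    can_id, length = struct.unpack(">HB", payload[:3])
    return can_id, payload[3:3 + length]

if __name__ == "__main__":
    frame = can_to_wireless(0x120, b"\x01\x02\x03")   # e.g. a sensor reading
    print(wireless_to_can(frame))                     # -> (288, b'\x01\x02\x03')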

Relevance: 60.00%

Abstract:

This research is concerned with the development of distributed real-time systems in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and to ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols applicable to the coordination of distributed real-time control systems. Extended forms of the standard two phase commit protocol, providing fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the complete system to be analysed in a unified manner.

A common problem for Petri net based techniques is state space explosion; a modular approach to both design and analysis helps to cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must still be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques used to model large systems in Petri nets are considered, and a hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module; local events are thus concealed through the projection of the external behaviour of modules. The hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events keeps the state space manageable.

The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. The hybrid approach is applied to the Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
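
For reference, the standard two phase commit protocol on which the extended, robust variants build can be sketched as follows (a simplified Python illustration with synchronous, failure-free messaging; the participant names are invented and the sketch deliberately omits the timeout and recovery behaviour that the fault-tolerant, real-time extensions add):

# Simplified two phase commit: phase 1 collects votes, phase 2 broadcasts the
# decision. Participants here are plain objects; real ones are remote processes.
class Participant:
    def __init__(self, name, will_commit=True):
        self.name, self.will_commit, self.state = name, will_commit, "INIT"

    def prepare(self):                 # phase 1: vote request
        self.state = "READY" if self.will_commit else "ABORTED"
        return self.will_commit

    def finish(self, commit):          # phase 2: decision
        self.state = "COMMITTED" if commit else "ABORTED"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]   # phase 1
    decision = all(votes)                         # unanimous yes => commit
    for p in participants:
        p.finish(decision)                        # phase 2
    return decision

if __name__ == "__main__":
    ps = [Participant("valve_ctrl"), Participant("conveyor_ctrl", will_commit=False)]
    print("committed" if two_phase_commit(ps) else "aborted")   # -> aborted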

Relevance: 60.00%

Abstract:

Supply chains have become an important focus for competitive advantage. The performance of a company increasingly depends on its ability to maintain effective and efficient relationships with its suppliers and customers. The extended enterprise (i.e. one composed of several partners) needs to be formed dynamically in order to be agile and adaptable. According to the Digital Manufacturing paradigm, companies must be able to quickly share and disseminate information regarding the planning, design and manufacturing of products. Additionally, they must be responsive to all technical and business determinants, as well as be assessed and certified for guaranteed performance. The current research presents a solution for the dynamic composition of the extended enterprise, formed to take advantage of market opportunities quickly and efficiently. A construction model was developed, consisting of an information model, a protocol model and a process model. The information model is based on the concepts of the Supply Chain Operations Reference model (SCOR®). It defines the information used to negotiate the participation of candidate companies in the dynamic establishment of a network that responds to a given demand for developing and manufacturing products, in seven steps: request for information; request for qualification; alignment of strategy; request for proposal; request for quotation; compatibility of process; and compatibility of system. The protocol model, inspired by the OSI reference model, provides a framework for linking customers and suppliers and indicates the sequence to be followed in order to select the companies that become suppliers. The process model was specified according to the BPMN standard and implemented as a web-based application that runs the process through its several steps, using forms to gather data. An application example in the context of the oil and gas industry demonstrates the solution concept.
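
The sequence imposed by the protocol model can be sketched as a simple ordered check (a Python illustration; the step names follow the seven steps listed above, while the function, candidate name and pass/fail inputs are assumptions):

# The seven negotiation steps of the protocol model, enforced in order.
# A candidate supplier advances only if it passes the current step.
STEPS = [
    "request_for_information",
    "request_for_qualification",
    "alignment_of_strategy",
    "request_for_proposal",
    "request_for_quotation",
    "compatibility_of_process",
    "compatibility_of_system",
]

def evaluate_candidate(candidate: str, passes: dict) -> str:
    """Return the outcome; the candidate is selected as a supplier only if
    every step succeeds, otherwise it is rejected at the first failed step."""
    for step in STEPS:
        if not passes.get(step, False):
            return f"{candidate} rejected at {step}"
    return f"{candidate} selected as supplier"

if __name__ == "__main__":
    results = {step: True for step in STEPS}
    results["request_for_quotation"] = False        # e.g. quotation not competitive
    print(evaluate_candidate("DrillCo", results))   # rejected at request_for_quotation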

Relevance: 40.00%

Abstract:

The IEEE 802.15.4 protocol is able to support time-sensitive Wireless Sensor Network (WSN) applications thanks to its Guaranteed Time Slot (GTS) medium access control mechanism. Several analytical and simulation models of the IEEE 802.15.4 protocol have recently been proposed. Nevertheless, currently available simulation models for this protocol are inaccurate and incomplete; in particular, they do not support the GTS mechanism. In this paper, we propose an accurate OPNET simulation model, focused on the implementation of the GTS mechanism. The motivation for this work is to validate the previously proposed Network Calculus based analytical model of the GTS mechanism and to compare the performance evaluation of the protocol given by the two alternative approaches. This paper therefore contributes an accurate OPNET model for the IEEE 802.15.4 protocol. Additionally, and probably more importantly, based on the simulation model we propose a novel methodology for tuning the protocol parameters so that better performance can be guaranteed, both in terms of maximizing the throughput of the allocated GTS and of minimizing frame delay.
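
To make the tuning problem concrete, the following Python sketch computes superframe and GTS timing from the beacon order (BO) and superframe order (SO) using the standard constants of the 2.4 GHz physical layer, and a rough upper bound on GTS throughput. The overhead handling is deliberately simplified; this is not the paper's analytical model.

# Superframe/GTS timing for IEEE 802.15.4 (2.4 GHz PHY: 62,500 symbols/s, 250 kb/s).
# Valid for beacon-enabled mode with 0 <= SO <= BO <= 14.
BASE_SUPERFRAME_SYMBOLS = 960      # aBaseSuperframeDuration
SLOTS_PER_SUPERFRAME = 16          # aNumSuperframeSlots
SYMBOL_RATE = 62_500               # symbols per second
BIT_RATE = 250_000                 # bits per second

def gts_capacity(beacon_order, superframe_order, gts_slots):
    """Rough upper bound on GTS throughput (bits/s), ignoring MAC/PHY overhead,
    inter-frame spacing and any idle time inside the GTS."""
    beacon_interval = BASE_SUPERFRAME_SYMBOLS * 2 ** beacon_order / SYMBOL_RATE
    superframe = BASE_SUPERFRAME_SYMBOLS * 2 ** superframe_order / SYMBOL_RATE
    slot = superframe / SLOTS_PER_SUPERFRAME
    gts_time = gts_slots * slot                    # seconds of GTS per beacon interval
    return gts_time * BIT_RATE / beacon_interval   # averaged over the beacon interval

if __name__ == "__main__":
    # Example: BO = SO = 3, one allocated GTS slot out of 16.
    print(f"{gts_capacity(3, 3, 1) / 1000:.1f} kb/s")   # -> 15.6 kb/s upper bound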

Relevance: 40.00%

Abstract:

As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on size, power consumption and price as embedded systems, but also have very demanding real-time and quality of service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively short time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an older version written in SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation, together with an extendable library of automatically configured, reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. A simulation model of a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
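
The transport triggered principle behind TACO, in which an instruction only moves data between function unit ports and writing to a trigger port starts the operation, can be illustrated with a small conceptual Python sketch (this is not the SystemC simulation environment; the unit, port and register names are invented for illustration):

# Conceptual transport triggered architecture: instructions are data moves;
# writing the trigger port of a function unit starts its operation.
class AddUnit:
    def __init__(self):
        self.operand = 0      # plain operand port
        self.result = 0       # result port
    def write_trigger(self, value):
        self.result = self.operand + value   # move to the trigger port starts the add

class Machine:
    def __init__(self):
        self.units = {"add": AddUnit()}
        self.regs = {"r1": 5, "r2": 7, "r3": 0}

    def move(self, src, dst):
        """Execute one move instruction 'src -> dst'."""
        value = self.regs[src] if src in self.regs else self._read_port(src)
        unit, port = dst.split(".") if "." in dst else (None, dst)
        if unit:
            u = self.units[unit]
            u.write_trigger(value) if port == "trigger" else setattr(u, port, value)
        else:
            self.regs[dst] = value

    def _read_port(self, src):
        unit, port = src.split(".")
        return getattr(self.units[unit], port)

if __name__ == "__main__":
    m = Machine()
    for instr in ["r1 -> add.operand", "r2 -> add.trigger", "add.result -> r3"]:
        m.move(*[s.strip() for s in instr.split("->")])
    print(m.regs["r3"])   # -> 12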

Relevance: 40.00%

Abstract:

Current protocols of anthracycline-induced cardiomyopathy in rabbits present high premature mortality and nephrotoxicity, rendering them unsuitable for studies requiring long-term evaluation of myocardial function (e.g., stem cell therapy). We compared two previously described protocols with an in-house developed protocol in three groups: group DOX2 received doxorubicin 2 mg/kg/week for 8 weeks; group DAU3 received daunorubicin 3 mg/kg/week for 10 weeks; and group DAU4 received daunorubicin 4 mg/kg/week for 6 weeks. A cohort of rabbits received saline (control). Results of blood tests, cardiac troponin I, echocardiography and histopathology were analysed. Whereas DOX2 and DAU3 rabbits showed high premature mortality (50% and 33%, respectively), DAU4 rabbits showed 7.6% premature mortality. None of the DOX2 rabbits developed overt dilated cardiomyopathy; 66% of DAU3 rabbits developed overt dilated cardiomyopathy and quickly progressed to severe congestive heart failure. Interestingly, 92% of DAU4 rabbits showed overt dilated cardiomyopathy and 67% developed congestive heart failure while exhibiting stable disease. DOX2 and DAU3 rabbits showed alterations of renal function, with DAU3 also exhibiting compromised hepatic function. Thus, a shortened protocol of anthracycline-induced cardiomyopathy, as in the DAU4 group, results in a high incidence of overt dilated cardiomyopathy that insidiously progresses to congestive heart failure, associated with reduced systemic compromise and very low premature mortality.

Relevance: 40.00%

Abstract:

Faced with an imminent restructuring of the electric power system, many countries have over the past few years invested in a new paradigm known as the Smart Grid. This paradigm targets the optimization and automation of the electric power network using advanced information and communication technologies. Among the main communication protocols for Smart Grids is DNP3, which provides secure data transmission at moderate rates. IEEE 802.15.4 is another communication protocol widely used in Smart Grids, especially in the so-called Home Area Network (HAN). Many Smart Grid applications thus depend on the interaction of these two protocols. This paper proposes modeling the integration of the DNP3 protocol and the IEEE 802.15.4 wireless standard in the traditional network simulator NS-2, enabling low-cost simulations of Smart Grid applications.
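
One practical issue when combining the two protocols is that a DNP3 message can exceed the small IEEE 802.15.4 frame (127-byte PHY limit), so it must be fragmented and reassembled. The sketch below illustrates this in plain Python, independently of NS-2; the 100-byte usable payload and the two-byte fragment header are assumptions for illustration.

# Fragmenting a DNP3 application message into payloads small enough for
# IEEE 802.15.4 frames (~100 usable bytes assumed after MAC headers).
MAX_PAYLOAD = 100

def fragment(message: bytes):
    """Split a DNP3 message into numbered fragments: [seq, total, chunk...]."""
    chunks = [message[i:i + MAX_PAYLOAD - 2]
              for i in range(0, len(message), MAX_PAYLOAD - 2)]
    return [bytes([seq, len(chunks)]) + chunk for seq, chunk in enumerate(chunks)]

def reassemble(fragments):
    """Rebuild the original message from fragments received in any order."""
    ordered = sorted(fragments, key=lambda f: f[0])
    assert len(ordered) == ordered[0][1], "missing fragments"
    return b"".join(f[2:] for f in ordered)

if __name__ == "__main__":
    dnp3_message = bytes(range(256)) * 2          # a 512-byte stand-in message
    frags = fragment(dnp3_message)
    assert reassemble(frags) == dnp3_message
    print(f"{len(frags)} fragments of at most {MAX_PAYLOAD} bytes")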

Relevance: 40.00%

Abstract:

During orthodontic tooth movement (OTM), alveolar bone is resorbed by osteoclasts in compression sites (CS) and deposited by osteoblasts in tension sites (TS). The aim of this study was to develop a standardized OTM protocol in mice and to investigate the expression of bone resorption and deposition markers in CS and TS. An orthodontic appliance was placed in C57BL6/J mice. To define the ideal orthodontic force, the molars of the mice were subjected to forces of 0.1 N, 0.25 N, 0.35 N and 0.5 N. The expression of mediators involved in bone remodeling at CS and TS was analysed using real-time PCR. The data revealed that a force of 0.35 N promoted optimal OTM and osteoclast recruitment without root resorption. The levels of TNF-alpha, RANKL, MMP13 and OPG were all altered in CS and TS. Whereas TNF-alpha and cathepsin K exhibited elevated levels in CS, RUNX2 and OCN levels were higher in TS. Our results suggest that 0.35 N is the ideal force for OTM in mice, with no side effects. Moreover, the expression of bone remodeling markers differed between the compression and tension areas, potentially explaining the distinct cellular migration and differentiation patterns in each of these sites. (C) 2012 Elsevier Ltd. All rights reserved.

Relevance: 40.00%

Abstract:

Fractures of the keel bone, a bone extending ventrally from the sternum, are a serious health and welfare problem in free-range laying hens. Recent findings suggest that a major cause of keel damage within extensive systems is collision with internal housing structures, though investigative efforts have been hindered by the difficulty of examining mechanisms and likely influencing factors at the moment of fracture. The objectives of this study were to develop an ex vivo impact protocol to model bone fracture in hens caused by collision, to assess the impact-related and bird-related factors influencing fracture occurrence and severity, and to identify correlations of mechanical and structural properties between different skeletal sites. We induced keel bone fractures in euthanized hens using a drop-weight impact tester able to generate a range of impact energies, producing fractures that replicate those commonly found in commercial settings. The results demonstrated that impact energies of a similar order to those expected in normal housing were able to produce fractures, and that greater collision energies increased both the likelihood of fracture and its severity. Relationships were also seen with the bone mineral density of the keel's lateral surface and with the peak reactive force (strength) at the base of the manubrial spine. Correlations were also identified between the keel and the long bones with respect to both strength and bone mineral density. This is the first study able to relate impact and bone characteristics to keel bone fracture at the moment of collision. A greater understanding of these relationships will provide means to reduce the level and severity of breakage in commercial systems.

Relevance: 40.00%

Abstract:

The Ocean Model Intercomparison Project (OMIP) aims to provide a framework for evaluating, understanding, and improving the ocean and sea-ice components of global climate and Earth system models contributing to the Coupled Model Intercomparison Project Phase 6 (CMIP6). OMIP addresses these aims in two complementary ways: (A) by providing an experimental protocol for global ocean/sea-ice models run with a prescribed atmospheric forcing, and (B) by providing a protocol for ocean diagnostics to be saved as part of CMIP6. We focus here on the physical component of OMIP, with a companion paper (Orr et al., 2016) offering details for the inert chemistry and interactive biogeochemistry. The physical portion of the OMIP experimental protocol follows that of the interannual Coordinated Ocean-ice Reference Experiments (CORE-II). Since 2009, CORE-I (Normal Year Forcing) and CORE-II have become the standard method for evaluating global ocean/sea-ice simulations and for examining mechanisms of forced ocean climate variability. The OMIP diagnostic protocol is relevant for any ocean model component of CMIP6, including the DECK (Diagnostic, Evaluation and Characterization of Klima experiments), the historical simulations, FAFMIP (Flux Anomaly Forced MIP), C4MIP (Coupled Carbon Cycle Climate MIP), DAMIP (Detection and Attribution MIP), DCPP (Decadal Climate Prediction Project), ScenarioMIP, as well as the ocean/sea-ice OMIP simulations. The bulk of this paper offers the scientific rationale for saving these diagnostics.