953 results for Flower shows


Relevance:

10.00%

Abstract:

We study MCF-7 breast cancer cell movement in a transwell apparatus. Various experimental conditions lead to a variety of monotone and nonmonotone responses that are difficult to interpret. We anticipate that the experimental results could be caused by cell-to-cell adhesion or volume exclusion. Without any modeling, it is impossible to understand the relative roles played by these two mechanisms. A lattice-based exclusion-process random-walk model incorporating agent-to-agent adhesion is applied to the experimental system. Our combined experimental and modeling approach shows that a low value of cell-to-cell adhesion strength provides the best explanation of the experimental data, suggesting that volume exclusion plays a more important role than cell-to-cell adhesion. This combined experimental and modeling study gives insight into the cell-level details and design of transwell assays.
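The abstract does not spell out the update rules; as a minimal sketch, assuming the common convention that a motility attempt succeeds with probability (1 - q)^n for adhesion strength q and n occupied nearest neighbours, a one-dimensional lattice-based exclusion-process random walk with adhesion can be written as follows. All names and parameter values are illustrative.

```python
import numpy as np

def simulate_adhesion_walk(L=200, N=40, q=0.2, steps=1000, seed=0):
    """1D lattice exclusion-process random walk with agent-to-agent adhesion.

    q in [0, 1) is the adhesion strength: a motility attempt succeeds with
    probability (1 - q)**n, where n is the number of occupied nearest
    neighbours; the move is then made only if the target site is empty."""
    rng = np.random.default_rng(seed)
    lattice = np.zeros(L, dtype=bool)
    lattice[rng.choice(L, size=N, replace=False)] = True  # random initial seeding
    for _ in range(steps):
        for _ in range(N):  # N motility attempts per time step
            site = rng.choice(np.flatnonzero(lattice))
            # occupied neighbours (0, 1 or 2) on the periodic lattice
            n = int(lattice[(site - 1) % L]) + int(lattice[(site + 1) % L])
            if rng.random() > (1 - q) ** n:
                continue  # attempt aborted by adhesion to neighbours
            target = (site + rng.choice([-1, 1])) % L
            if not lattice[target]:  # volume exclusion: move only into empty sites
                lattice[site], lattice[target] = False, True
    return lattice
```

Sweeping q and comparing simulated fluxes against the measured transwell responses is the kind of inference loop the abstract describes; with q = 0 the model reduces to pure volume exclusion.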

Relevance:

10.00%

Abstract:

The driving task requires sustained attention during prolonged periods, and can be performed in highly predictable or repetitive environments. Such conditions can create hypovigilance and impair performance in response to critical events. Identifying such impairment in monotonous conditions has been a major subject of research, but no research to date has attempted to predict it in real time. This pilot study aims to show that performance decrements due to monotonous tasks can be predicted through mathematical modelling that takes sensation-seeking levels into account. A short vigilance task sensitive to brief lapses of vigilance, the Sustained Attention to Response Task, is used to assess participants' performance. The prediction framework developed on this task could be extended to a monotonous driving task. A Hidden Markov Model (HMM) is proposed to predict participants' lapses in alertness. The driver's vigilance evolution is modelled as a hidden state and is correlated with a surrogate measure: the participant's reaction times. This experiment shows that the monotony of the task can lead to a substantial decline in performance in less than five minutes. This impairment can be predicted four minutes in advance with 86% accuracy using HMMs. The experiment showed that mathematical models such as HMMs can efficiently predict hypovigilance through surrogate measures. The presented model could lead to an in-vehicle device that detects driver hypovigilance in advance and warns the driver accordingly, thus offering the potential to enhance road safety and prevent road crashes.
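The trained model parameters are not given in the abstract; the sketch below, with purely illustrative transition and emission probabilities, shows how a two-state HMM can filter a stream of binarized reaction times and propagate the hidden vigilance state forward to produce an advance warning. All names and numbers are assumptions.

```python
import numpy as np

# Hypothetical two-state HMM: hidden vigilance state (0 = alert, 1 = lapsed);
# observations are binarized reaction times (0 = fast, 1 = slow or missed).
A = np.array([[0.95, 0.05],    # state transition probabilities (illustrative)
              [0.10, 0.90]])
B = np.array([[0.85, 0.15],    # emission probabilities P(obs | state)
              [0.30, 0.70]])
pi = np.array([0.9, 0.1])      # initial state distribution

def filter_vigilance(obs):
    """Forward algorithm: P(hidden state | observations so far) at each step."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    history = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
        history.append(alpha)
    return np.array(history)

def predict_lapse(obs, horizon):
    """Propagate the filtered state `horizon` steps ahead with no new data,
    returning the predicted probability of being in the lapsed state."""
    alpha = filter_vigilance(obs)[-1]
    return (alpha @ np.linalg.matrix_power(A, horizon))[1]
```

If each observation bin spanned one second, predict_lapse(obs, horizon=240) would correspond to the four-minute prediction horizon reported above.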

Relevance:

10.00%

Abstract:

As organizations reach higher levels of Business Process Management maturity, they tend to accumulate large collections of process models. These repositories may contain thousands of activities and be managed by different stakeholders with varying skills and responsibilities. However, while being of great value, these repositories induce high management costs. Thus, it becomes essential to keep track of the various model versions, as they may mutually overlap, supersede one another and evolve over time. We propose an innovative versioning model, and an associated storage structure, specifically designed to maximize sharing across process model versions and to automatically handle change propagation. The focal point of this technique is to version single process model fragments, rather than entire process models. Indeed, empirical evidence shows that real-life process model repositories contain numerous duplicate fragments. Experiments on two industrial datasets confirm the usefulness of our technique.
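The paper's storage structure is not reproduced here; the content-addressed sketch below illustrates the core idea of versioning individual fragments rather than entire models, so that duplicate fragments are stored once and shared across versions. All class and method names are hypothetical.

```python
import hashlib

class FragmentStore:
    """Content-addressed store: versions reference fragments by digest, so
    identical fragments are stored once and shared across model versions."""

    def __init__(self):
        self.fragments = {}   # digest -> serialized fragment
        self.versions = {}    # (model_id, version) -> ordered fragment digests

    def _put(self, fragment: str) -> str:
        digest = hashlib.sha256(fragment.encode()).hexdigest()
        self.fragments.setdefault(digest, fragment)  # dedup: store once
        return digest

    def commit(self, model_id: str, version: int, fragments: list[str]) -> None:
        """Record a new model version as a list of fragment references."""
        self.versions[(model_id, version)] = [self._put(f) for f in fragments]

    def checkout(self, model_id: str, version: int) -> list[str]:
        """Reassemble a version from its shared fragments."""
        return [self.fragments[d] for d in self.versions[(model_id, version)]]
```

Committing a new version that changes one fragment then adds a single new entry to the fragment table, while all unchanged fragments remain shared with earlier versions.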

Relevance:

10.00%

Abstract:

This chapter provides an overview of the Editor, which is the core modeling component of the YAWL system. Specifically, it shows how to specify control-flow, data and resourcing requirements in a YAWL workflow model. Moreover, it describes the error reporting capabilities of the Editor and the features of its interchange format.

Relevance:

10.00%

Abstract:

Myosin is believed to act as the molecular motor for many actin-based motility processes in eukaryotes. It is becoming apparent that a single species may possess multiple myosin isoforms, and at least seven distinct classes of myosin have been identified from studies of animals, fungi, and protozoans. The complexity of the myosin heavy-chain gene family in higher plants was investigated by isolating and characterizing myosin genomic and cDNA clones from Arabidopsis thaliana. Six myosin-like genes were identified from three polymerase chain reaction (PCR) products (PCR1, PCR11, PCR43) and three cDNA clones (ATM2, MYA2, MYA3). Sequence comparisons of the deduced head domains suggest that these myosins are members of two major classes. Analysis of the overall structure of the ATM2 and MYA2 myosins shows that they are similar to the previously identified ATM1 and MYA1 myosins, respectively. MYA3 appears to possess a novel tail domain, with five IQ repeats, a six-member imperfect repeat, and a segment of unique sequence. Northern blot analyses indicate that some of the Arabidopsis myosin genes are preferentially expressed in different plant organs. Combined with previous studies, these results show that the Arabidopsis genome contains at least eight myosin-like genes representing two distinct classes.

Relevance:

10.00%

Abstract:

Given that what students learn is so strongly related to how they learn, the modes of delivery and assessment that we as teachers provide them with have a major impact on their ability to learn. As this paper shows, good learning environments are constructed from a range of modes that respond to student learning styles and seek to align activities and learning outcomes with assessment tasks, to better accommodate a diversity of student learning styles and backgrounds. This paper uses a number of models of learning to critique and analyse the traditional practices of assessment in an architectural design class, and then proposes and reports on an alternative pattern of assessment. It discusses the issues of accommodating a group of first-year architecture students at Queensland University of Technology in 2009. These students arrived with diverse prior learning backgrounds, the group being evenly split between those with drawing capabilities and those without. They also had a variety of learning style preferences. The experiment in alternative assessment patterns presented here shows that what has traditionally been considered a diverse and difficult cohort of students can benefit from the assessment of a range of task types at different stages in the learning cycle.

Relevance:

10.00%

Abstract:

Existing court data suggest that adult Indigenous offenders are more likely than non-Indigenous defendants to be sentenced to prison but once imprisoned generally receive shorter terms. Using findings from international and Australian multivariate statistical analyses, this paper reviews the three key hypotheses advanced as plausible explanations for these differences: 1) differential involvement, 2) negative discrimination, 3) positive discrimination. Overall, prior research shows strong support for the differential involvement thesis, some support for positive discrimination and little foundation for negative discrimination in the sentencing of Indigenous defendants. Where discrimination is found, we argue that this may be explained by the lack of a more complete set of control variables in researchers’ multivariate models.

Relevance:

10.00%

Abstract:

In mobile videos, small viewing size and bitrate limitations often cause unpleasant viewing experiences, a problem that is particularly acute for fast-moving sports videos. To optimize the overall user experience of viewing sports videos on mobile phones, this paper explores the benefits of emphasizing the Region of Interest (ROI) by 1) zooming in and 2) enhancing its quality. The main goal is to measure the effectiveness of these two approaches and determine which is more effective. To obtain a more comprehensive understanding of the overall user experience, the study considers users' interest in video content and users' acceptance of the perceived video quality, and compares the user experience for sports videos with that for other content types such as talk shows. The results from a user study with 40 subjects demonstrate that zooming and ROI enhancement are both effective in improving the overall user experience with talk-show and mid-shot soccer videos. However, for the full-shot scenes in soccer videos, only zooming is effective, while ROI enhancement has a negative effect. Moreover, users' interest in video content directly affects not only the user experience and the acceptance of video quality, but also the effect of content type on the user experience. Finally, the overall user experience is closely related to the degree of acceptance of video quality and the degree of interest in video content. This study is valuable for identifying effective approaches to improve user experience, especially in mobile sports video streaming contexts, where the available bandwidth is usually low or limited. It also provides further understanding of the factors influencing user experience.

Relevance:

10.00%

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits, the main one being the lower size limit an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high-speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one whose refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium has its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam, so the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied wherever thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that, simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly the degradation and recovery process, and confirms that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, a detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
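The thesis's photorefractive response model is not reproduced here. As a rough illustration of the FD-BPM core on which such a model builds, the following is a minimal Crank-Nicolson sketch of linear paraxial propagation through a fixed index-change profile dn(x); all parameter values are illustrative, and a practical implementation would use a banded (tridiagonal) solver rather than dense linear algebra.

```python
import numpy as np

def fd_bpm(E0, dn, wavelength=633e-9, n0=2.2, dx=1e-6, dz=1e-6, nz=200):
    """Minimal Crank-Nicolson FD-BPM for the paraxial equation
    dE/dz = (i/(2k)) d2E/dx2 + i k0 dn(x) E, with k = n0 k0.
    E0: input field on the transverse grid; dn: refractive index change."""
    nx = E0.size
    k0 = 2 * np.pi / wavelength
    k = n0 * k0
    # Second-derivative operator with Dirichlet boundaries (dense for clarity).
    D2 = (np.diag(np.ones(nx - 1), -1) - 2 * np.eye(nx)
          + np.diag(np.ones(nx - 1), 1)) / dx**2
    H = 1j / (2 * k) * D2 + 1j * k0 * np.diag(dn)
    A = np.eye(nx) - 0.5 * dz * H   # implicit half step
    B = np.eye(nx) + 0.5 * dz * H   # explicit half step
    E = E0.astype(complex)
    for _ in range(nz):
        E = np.linalg.solve(A, B @ E)
    return E

# Example: Gaussian beam through a single 30 um stripe of raised index.
x = (np.arange(512) - 256) * 1e-6
E_out = fd_bpm(np.exp(-(x / 20e-6) ** 2), dn=1e-4 * (np.abs(x) < 15e-6))
```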

Relevance:

10.00%

Abstract:

Increases in atmospheric concentrations of the greenhouse gases (GHGs) carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) due to human activities have been linked to climate change. GHG emissions from land use change and agriculture have been identified as significant contributors to both Australia's and the global GHG budget. This contribution is expected to increase over the coming decades as rates of agricultural intensification and land use change accelerate to support population growth and food production. Limited data exist on CO2, CH4 and N2O trace gas fluxes from subtropical or tropical soils and land uses. To develop effective mitigation strategies, a full global warming potential (GWP) accounting methodology is required that includes emissions of the three primary greenhouse gases; mitigation strategies that focus on one gas only can inadvertently increase emissions of another. For this reason, detailed inventories of GHGs from soils and vegetation under individual land uses are urgently required for subtropical Australia. This study aimed to quantify GHG emissions over two consecutive years from three major land uses: a well-established, unfertilized subtropical grass-legume pasture, a 30-year-old lychee orchard and a remnant subtropical gallery rainforest, all located near Mooloolah, Queensland. GHG fluxes were measured using a combination of high-resolution automated sampling, coarser spatial manual sampling and laboratory incubations. Comparison between the land uses revealed that land use change can have a substantial impact on the GWP of a landscape long after the deforestation event. The conversion of rainforest to agricultural land resulted in as much as a 17-fold increase in GWP, from 251 kg CO2 eq. ha-1 yr-1 in the rainforest, to 889 kg CO2 eq. ha-1 yr-1 in the pasture, to 2538 kg CO2 eq. ha-1 yr-1 in the lychee plantation. This increase resulted from altered N cycling and a reduction in the aerobic capacity of the soil in the pasture and lychee systems, enhancing denitrification and nitrification events and reducing atmospheric CH4 uptake in the soil. High infiltration, drainage and subsequent soil aeration under the rainforest limited N2O loss, as well as promoting CH4 uptake of 11.2 g CH4-C ha-1 day-1. This was among the highest rates reported for rainforest systems, indicating that aerated subtropical rainforests can act as a substantial sink of CH4. Interannual climatic variation resulted in significantly higher N2O emission from the pasture during 2008 (5.7 g N2O-N ha-1 day-1) compared to 2007 (3.9 g N2O-N ha-1 day-1), despite nearly 500 mm less rainfall. Nitrous oxide emissions from the pasture were highest during the summer months and were highly episodic, related more to the magnitude and distribution of rain events than to soil moisture alone. Mean N2O emissions from the lychee plantation increased from an average of 4.0 g N2O-N ha-1 day-1 to 19.8 g N2O-N ha-1 day-1 following a split application of N fertilizer (560 kg N ha-1, equivalent to 1 kg N tree-1). The timing of the split application was found to be critical to N2O emissions, with over twice as much lost following an application in spring (emission factor (EF): 1.79%) compared to autumn (EF: 0.91%). This was attributed to the hot and moist climatic conditions and a reduction in plant N uptake during spring, creating conditions conducive to N2O loss. These findings demonstrate that land use change in subtropical Australia can be a significant source of GHGs.
Moreover, the study shows that modifying the timing of fertilizer application can be an efficient way of reducing GHG emissions from subtropical horticulture.
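For context, these emission factors follow the standard definition of fertilizer-induced N2O-N loss as a fraction of N applied. Assuming, purely for illustration, that each split delivered half of the 560 kg N ha-1 and neglecting the background term, the spring application would imply roughly:

```latex
\mathrm{EF} = \frac{E_{\mathrm{fert}} - E_{\mathrm{background}}}{N_{\mathrm{applied}}} \times 100\%,
\qquad
E_{\mathrm{spring}} \approx 0.0179 \times 280~\mathrm{kg\,N\,ha^{-1}}
\approx 5~\mathrm{kg\,N_2O\text{-}N\,ha^{-1}}.
```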

Relevance:

10.00%

Abstract:

Melt electrospinning is an area of electrospinning with relatively little published literature, although the technique avoids solvent accumulation and/or toxicity, a feature favoured in certain applications. In the study reported, we melt-electrospun blends of poly(ε-caprolactone) (PCL) and an amphiphilic diblock copolymer consisting of poly(ethylene glycol) and PCL segments (PEG-block-PCL). A custom-made electrospinning apparatus was built and various combinations of instrument parameters, such as voltage and polymer feed rate, were investigated. Melt electrospinning of the pure PEG-block-PCL copolymer did not produce consistent and uniform fibres, due to its low molecular weight, while blends of PCL and PEG-block-PCL, for some parameter combinations and certain weight ratios of the two components, produced continuous fibres significantly thinner (average diameter of ca 2 µm) than pure PCL. The PCL fibres obtained had average diameters ranging from 6 to 33 µm; meshes were uniform at the lowest voltage employed, and mesh uniformity decreased as the voltage was increased. This approach shows that PCL and blends of PEG-block-PCL and PCL can be readily processed by melt electrospinning to obtain fibrous meshes with varied average diameters and morphologies that are of interest for tissue engineering purposes.

Relevance:

10.00%

Abstract:

For almost a decade before Hollywood existed, the French firm Pathe towered over the early film industry, with estimates of its share of all films sold around the world varying between 50% and 70%. Pathe was the first global entertainment company. This paper analyses its rise to market leadership by applying a theoretical framework drawn from the business literature on the causes of industry dominance, which provides insights into how firms acquire and maintain market dominance, in this case in the film industry. The paper uses evidence presented by film historians to argue that Pathe "fits" the expected theoretical model of a dominant firm because it had a marketing orientation, used an effective quality-based competitive strategy and possessed the six critical marketing capabilities that business research shows enable the best-performing firms to consistently outperform rivals.

Relevance:

10.00%

Abstract:

This research shows that gross pollutant traps (GPTs) continue to play an important role in preventing visible street waste (gross pollutants) from contaminating the environment. The demand for these GPTs calls for stringent quality control, and this research provides a foundation for rigorously examining the devices. A novel and comprehensive testing approach to examine a dry sump GPT was developed. The GPT is designed with internal screens to capture gross pollutants: organic matter and anthropogenic litter. This device has not been previously investigated. Apart from the review of GPTs and gross pollutant data, the testing approach comprises four further aspects of this research: field work and an historical overview of street waste/stormwater pollution, calibration of equipment, hydrodynamic studies, and gross pollutant capture/retention investigations. This work is the first comprehensive investigation of its kind and provides valuable practical information for the current research and any future work pertaining to the operation of GPTs and the management of street waste in the urban environment. Gross pollutant traps, including patented and registered designs developed by industry, have specific internal configurations and hydrodynamic separation characteristics which demand individual testing and performance assessments. Stormwater devices are usually evaluated by environmental protection agencies (EPAs), professional bodies and water research centres. In the USA, the American Society of Civil Engineers (ASCE) and the Environmental Water Resource Institute (EWRI) are examples of professional and research organisations actively involved in these evaluation/verification programs. These programs largely rely on field evaluations alone, which are limited in scope, mainly for cost and logistical reasons. In Australia, evaluation/verification programs for new devices in the stormwater industry are not well established. The current limitations in GPT evaluation methodologies have been addressed in this research by establishing a new testing approach. This approach uses a combination of physical and theoretical models to examine in detail the hydrodynamic and capture/retention characteristics of the GPT. The physical model consisted of a 50% scale model GPT rig with screen blockages varying from 0 to 100%. This rig was placed in a 20 m flume, and various inlet and outflow operating conditions were modelled on observations made during the field monitoring of GPTs. Due to infrequent cleaning, the retaining screens inside the GPTs were often observed to be blocked with organic matter. Blocked screens can radically change the hydrodynamic and gross pollutant capture/retention characteristics of a GPT, as shown by this research. This research involved the use of equipment, such as acoustic Doppler velocimeters (ADVs) and dye concentration (Komori) probes, deployed for the first time in a dry sump GPT. Hence, it was necessary to rigorously evaluate the capability and performance of these devices, particularly in the case of the custom-made Komori probes, about which little was known. The evaluation revealed that the Komori probes have a frequency response of up to 100 Hz (dependent upon fluid velocities), which was adequate to measure the relevant fluctuations of dye introduced into the GPT flow domain. The outcome of this evaluation was the establishment of methodologies for the hydrodynamic measurements and the gross pollutant capture/retention experiments.
The hydrodynamic measurements consisted of point-based acoustic Doppler velocimeter (ADV) measurements, flow-field particle image velocimetry (PIV) capture, head loss experiments and computational fluid dynamics (CFD) simulation. The gross pollutant capture/retention experiments included the use of anthropogenic litter components, tracer dye and custom-modified artificial gross pollutants. Anthropogenic litter was limited to tin cans, bottle caps and plastic bags, while the artificial pollutants consisted of 40 mm spheres with a range of four buoyancies. The hydrodynamic results led to the definition of global and local flow features. The gross pollutant capture/retention results showed that when the internal retaining screens are fully blocked, the capture/retention performance of the GPT deteriorates rapidly. The overall results showed that the GPT will operate efficiently until at least 70% of the screens are blocked, particularly at high flow rates. This important finding indicates that cleaning operations could be planned more effectively, around the point at which the GPT's capture/retention performance deteriorates. At lower flow rates, the capture/retention performance trends were reversed. There is little difference in the poor capture/retention performance between a fully blocked GPT and a partially filled or empty GPT with 100% screen blockage. The results also revealed that the GPT is designed with an efficient high-flow bypass system to avoid upstream blockages. The capture/retention performance of the GPT at medium to high inlet flow rates is close to maximum efficiency (100%). With regard to the design appraisal of the GPT, a raised inlet offers better capture/retention performance, particularly at lower flow rates. Further design appraisals of the GPT are recommended.

Relevance:

10.00%

Abstract:

In a digital world, users' Personally Identifiable Information (PII) is normally managed with a system called an Identity Management System (IMS). There are many types of IMSs. There are situations when two or more IMSs need to communicate with each other (such as when a service provider needs to obtain some identity information about a user from a trusted identity provider). There can be interoperability issues when the communicating parties use different types of IMS. To facilitate interoperability between different IMSs, an Identity Meta System (IMetS) is normally used. An IMetS can, at least theoretically, join various types of IMSs to make them interoperable and give users the illusion that they are interacting with just one IMS. However, due to the complexity of an IMS, attempting to join various types of IMSs is a technically challenging task, let alone assessing how well an IMetS manages to integrate these IMSs. The first contribution of this thesis is the development of a generic IMS model called the Layered Identity Infrastructure Model (LIIM). Using this model, we develop a set of properties that an ideal IMetS should provide. This idealized form is then used as a benchmark to evaluate existing IMetSs. Different types of IMS provide varying levels of privacy protection support. Unfortunately, as observed by Jøsang et al. (2007), there is insufficient privacy protection in many of the existing IMSs. In this thesis, we study and extend a type of privacy enhancing technology known as an Anonymous Credential System (ACS). In particular, we extend the ACS which is built on the cryptographic primitives proposed by Camenisch, Lysyanskaya, and Shoup. We call this system the Camenisch, Lysyanskaya, Shoup - Anonymous Credential System (CLS-ACS). The goal of CLS-ACS is to let users be as anonymous as possible. Unfortunately, CLS-ACS has problems, including (1) the concentration of power in a single entity, known as the Anonymity Revocation Manager (ARM), who, if malicious, can trivially reveal a user's PII (resulting in an illegal revocation of the user's anonymity), and (2) poor performance due to the resource-intensive cryptographic operations required. The second and third contributions of this thesis are two protocols that reduce the trust dependencies on the ARM during users' anonymity revocation. Both protocols distribute trust from the ARM to a set of n referees (n > 1), resulting in a significant reduction of the probability of an anonymity revocation being performed illegally. The first protocol, called the User Centric Anonymity Revocation Protocol (UCARP), allows a user's anonymity to be revoked in a user-centric manner (that is, the user is aware that his/her anonymity is about to be revoked). The second protocol, called the Anonymity Revocation Protocol with Re-encryption (ARPR), allows a user's anonymity to be revoked by a service provider in an accountable manner (that is, there is a clear mechanism to determine which entity can eventually learn, and possibly misuse, the identity of the user). The fourth contribution of this thesis is a protocol called the Private Information Escrow bound to Multiple Conditions Protocol (PIEMCP). This protocol is designed to address the performance issue of CLS-ACS by applying CLS-ACS in a federated single sign-on (FSSO) environment.
Our analysis shows that PIEMCP can both reduce the number of expensive modular exponentiation operations required and lower the risk of illegal revocation of users' anonymity. Finally, the protocols proposed in this thesis are complex and need to be formally evaluated to ensure that their required security properties are satisfied. In this thesis, we use Coloured Petri nets (CPNs) and their corresponding state space analysis techniques. All of the protocols proposed in this thesis have been formally modeled and verified using these formal techniques. Therefore, the fifth contribution of this thesis is a demonstration of the applicability of CPNs and their corresponding analysis techniques in modeling and verifying privacy enhancing protocols. To our knowledge, this is the first time that CPNs have been comprehensively applied to model and verify privacy enhancing protocols. From our experience, we also propose several CPN modeling approaches, including the modeling of complex cryptographic primitives (such as zero-knowledge proof protocols), attack parameterization, and others. The proposed approaches can be applied to other security protocols, not just privacy enhancing protocols.
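The UCARP and ARPR constructions themselves are not reproduced here. As a hedged illustration of the underlying idea of distributing a revocation capability across n referees so that no single party can act alone, below is a minimal sketch of threshold secret sharing (Shamir's scheme), a standard primitive for this purpose; the thesis's protocols are more involved, and all names here are illustrative.

```python
import secrets

PRIME = 2**127 - 1  # Mersenne prime; field large enough for a toy secret

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k of them reconstruct it. Shares are
    points on a random degree k-1 polynomial with constant term `secret`."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# e.g. reconstruct(split(1234, n=5, k=3)[:3]) == 1234
```

With k-of-n sharing, at least k referees must cooperate before the identity-revealing key can be reconstructed, which is the kind of trust reduction the two revocation protocols aim for.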

Relevance:

10.00%

Abstract:

This paper compares the performance of two droop control schemes in a hybrid microgrid. In the presence of both converter-interfaced and inertial sources, the droop controllers share power in a decentralized fashion. Both droop controllers facilitate reactive power sharing based on voltage droop. However, in frequency droop control the real power sharing depends on the frequency, while in angle droop control it depends on the output voltage angle. For converter-interfaced sources this reference voltage is tracked directly, while for an inertial DG the reference power for the prime mover is calculated from the reference angle using the proposed angle control scheme. This coordinated control scheme shows a significant improvement in system performance. The comparison with conventional frequency droop shows that the angle control scheme shares power with much lower frequency deviation, a significant improvement particularly under frequent load changes.
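The paper's controller gains are not given here; for reference, the standard forms of the two real-power droop laws being compared, together with the common reactive-power voltage droop, are (asterisks denote set points, m and n are droop gains):

```latex
\underbrace{f = f^{*} - m\,(P - P^{*})}_{\text{frequency droop}}
\qquad
\underbrace{\delta = \delta^{*} - m\,(P - P^{*})}_{\text{angle droop}}
\qquad
\underbrace{V = V^{*} - n\,(Q - Q^{*})}_{\text{voltage (reactive power) droop}}
```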