Abstract:
Demand for delivering high instantaneous power in a compressed form (pulse shape) has increased widely during recent decades. The flexible shapes with variable pulse specifications offered by pulsed power have made it a practical and effective supply method for an extensive range of applications. In particular, the release of basic subatomic particles (i.e. electrons, protons and neutrons) in an atom (the ionization process) and the synthesizing of molecules to form ions or other molecules are among those reactions that necessitate large amounts of instantaneous power. In addition to such decomposition processes, there have recently been demands for pulsed power in other areas such as the combination of molecules (e.g. fusion, material joining), radiation generation (e.g. electron beams, lasers, and radar), explosions (e.g. concrete recycling), and wastewater, exhaust gas, and material surface treatments. These pulses are widely employed in the silent discharge process in all types of materials (gas, fluid and solid), in some cases to form a plasma and consequently accelerate the associated process. Due to this fast-growing demand for pulsed power in industrial and environmental applications, the need for more efficient and flexible pulse modulators is now receiving greater consideration. Sensitive applications, such as plasma fusion and laser guns, also require precisely produced repetitive pulses of higher quality. Many research studies are being conducted in different areas that need a flexible pulse modulator to vary pulse features so as to investigate the influence of these variations on the application. In addition, there is a need to prevent the waste of the considerable amount of energy caused by the arc phenomena that frequently occur after the plasma process. Control over power flow during the supply process is a critical capability that enables the pulse supply to halt the supply process at any stage. Different pulse modulators utilising different accumulation techniques, including Marx Generators (MG), Magnetic Pulse Compressors (MPC), Pulse Forming Networks (PFN) and Multistage Blumlein Lines (MBL), are currently employed to supply a wide range of applications. Gas/magnetic switching technologies (such as the spark gap and the hydrogen thyratron) have conventionally been used as switching devices in pulse modulator structures because of their high voltage ratings and considerably short rise times. However, they suffer from serious drawbacks such as low efficiency, reliability and repetition rate, and a short life span. They are also bulky, heavy and expensive. Recently developed solid-state switching technology is an appropriate substitute for these switching devices due to the benefits it brings to pulse supplies. Besides being compact, efficient, affordable and reliable, and having a long life span, its high-frequency switching capability allows repetitive operation of a pulsed power supply. The main concerns in using solid-state transistors are the voltage rating and the rise time of available switches, which in some cases cannot satisfy the application's requirements. However, several power electronics configurations and techniques make solid-state utilisation feasible for high voltage pulse generation.
Therefore, the design and development of novel methods and topologies with higher efficiency and flexibility for pulsed power generators is the main scope of this research work. This aim is pursued through several innovative proposals that can be classified under two principal objectives: • to innovate and develop novel solid-state based topologies for pulsed power generation; and • to improve available technologies that have the potential to accommodate solid-state technology by revising, reconfiguring and adjusting their structures and control algorithms. The quest to identify novel topologies for pulsed power production began with a deep and thorough review of conventional pulse generators and useful power electronics topologies. This study suggests that efficiency and flexibility are the most significant demands of plasma applications that have not been met by state-of-the-art methods. Many solid-state based configurations were considered and simulated in order to evaluate their potential to be utilised in the pulsed power area. Parts of this literature review are documented in Chapter 1 of this thesis. Current source topologies demonstrate valuable advantages in supplying loads with capacitive characteristics, such as plasma applications. To investigate the influence of switching transients associated with solid-state devices on the rise time of pulses, simulation-based studies were undertaken. A variable current source was used to pump different current levels into a capacitive load, and it was evident that dissimilar dv/dt values were produced at the output. The evidence acquired from this examination therefore rules out switching-transient effects on pulse rise time. A detailed report of this study is given in Chapter 6 of this thesis. This study inspired the design of a solid-state based topology that takes advantage of both current and voltage sources. A series of switch-resistor-capacitor units at the output splits the produced voltage into lower levels, so it can be shared by the switches. A smart but complicated switching strategy is also designed to discharge the residual energy after each supply cycle. To prevent reverse power flow and to reduce the complexity of the control algorithm in this system, the resistors in the common paths of the units are substituted with diode rectifiers (switch-diode-capacitor). This modification not only makes it feasible to stop the load supply process at any stage (and consequently to save energy), but also enables the converter to operate in a two-stroke mode with asymmetrical capacitors. Component selection and energy-exchange calculations are carried out with respect to application specifications and demands. Both topologies were modelled, and simulation studies were carried out on the simplified models. Experimental assessments were also performed on implemented hardware, and the results verified the initial analysis. Both converters are thoroughly discussed in Chapters 2 and 3 of the thesis. Conventional MGs have recently been modified to use solid-state transistors (e.g. insulated-gate bipolar transistors) instead of magnetic/gas switching devices. The resistive insulators previously used in their structures are substituted with diode rectifiers to give MGs proper voltage sharing.
However, despite the utilisation of solid-state technology in MG configurations, further design and control amendments can still be made to achieve improved performance with fewer components. After consideration of a number of charging techniques, the resonance phenomenon is adopted in a proposal to charge the capacitors. In addition to charging the capacitors to twice the input voltage, triggering the switches at the moment the conducted current through them is zero significantly reduces the switching losses. Another configuration is also introduced in this research for the Marx topology, based on commutation circuits that use a current source to charge the capacitors. In this design, diode-capacitor units, each comprising two Marx stages, are connected in cascade through solid-state devices and aggregate the voltages across the capacitors to produce a high voltage pulse. The polarity of the voltage across one capacitor in each unit is reversed in an intermediate mode by connecting the commutation circuit to the capacitor. Isolation of the input side from the load side is provided in this topology by disconnecting the load from the current source during the supply process. Furthermore, the number of required fast switching devices in both designs is reduced to half the number used in a conventional MG; they are replaced with slower switches (such as thyristors) that need simpler driving modules. In addition, the number of switches contributing to the discharging paths is halved, which leads to a reduction in conduction losses. The associated models are simulated, and hardware tests are performed to verify the validity of the proposed topologies. Chapters 4, 5 and 7 of the thesis present all the relevant analysis relating to these topologies.
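As a hedged illustration of the resonant charging principle adopted above (component values are arbitrary, not taken from the thesis): in an ideal series LC circuit fed from a DC source, the capacitor voltage follows v_C(t) = V_in(1 − cos ω₀t), peaking at twice the input voltage exactly when the inductor current returns to zero, which is what permits the loss-reducing zero-current switching described.

```python
import numpy as np

# Ideal series LC resonant charging from a DC source V_in:
#   v_C(t) = V_in * (1 - cos(w0 t)),  i_L(t) = (V_in / Z0) * sin(w0 t)
# with w0 = 1/sqrt(LC) and Z0 = sqrt(L/C). Values are illustrative.
V_in = 1000.0          # input voltage (V), illustrative
L, C = 100e-6, 10e-6   # resonant inductor (H) and storage capacitor (F)

w0 = 1.0 / np.sqrt(L * C)      # resonant angular frequency
Z0 = np.sqrt(L / C)            # characteristic impedance

t = np.linspace(0.0, np.pi / w0, 1000)   # one half resonant cycle
v_C = V_in * (1.0 - np.cos(w0 * t))      # capacitor voltage
i_L = (V_in / Z0) * np.sin(w0 * t)       # charging current

print(f"peak capacitor voltage: {v_C[-1]:.0f} V (= 2 x V_in)")
print(f"current at that instant: {i_L[-1]:.2e} A (zero-current switching)")
```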
Abstract:
Mixture models are a flexible tool for unsupervised clustering that has found popularity in a vast array of research areas. In studies of medicine, the use of mixtures holds the potential to greatly enhance our understanding of patient responses through the identification of clinically meaningful clusters that, given the complexity of many data sources, may otherwise be intangible. Furthermore, when developed in the Bayesian framework, mixture models provide a natural means of capturing and propagating uncertainty in different aspects of a clustering solution, arguably resulting in richer analyses of the population under study. This thesis investigates the use of Bayesian mixture models in analysing varied and detailed sources of patient information collected in the study of complex disease. The first aim of this thesis is to showcase the flexibility of mixture models in modelling markedly different types of data. In particular, we examine three common variants of the mixture model, namely finite mixtures, Dirichlet process mixtures and hidden Markov models. Beyond the development and application of these models to different sources of data, this thesis also focuses on modelling different aspects of uncertainty in clustering. Examples of the clustering uncertainty considered are uncertainty in a patient's true cluster membership, and accounting for uncertainty in the true number of clusters present. Finally, this thesis aims to address, and propose solutions to, the task of comparing clustering solutions, whether this be comparing the patients or observations assigned to different subgroups, or comparing clustering solutions over multiple datasets. To address these aims, we consider a case study in Parkinson's disease (PD), a complex and commonly diagnosed neurodegenerative disorder. In particular, two commonly collected sources of patient information are considered. The first source comprises data on symptoms associated with PD, recorded using the Unified Parkinson's Disease Rating Scale (UPDRS), and constitutes the first half of this thesis. The second half of this thesis is dedicated to the analysis of microelectrode recordings collected during Deep Brain Stimulation (DBS), a popular palliative treatment for advanced PD. Analysis of this second source of data centres on the problems of unsupervised detection and sorting of action potentials, or "spikes", in recordings of multiple cell activity, providing valuable information on real-time neural activity in the brain.
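As a small, hedged illustration of the Dirichlet process variant mentioned above (the thesis's own models and data are not reproduced here; the data below are synthetic), scikit-learn's truncated Dirichlet process mixture can infer the effective number of clusters and report per-observation membership probabilities, the kind of clustering uncertainty this work propagates:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic "patient scores": two latent subgroups, illustrative only.
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
               rng.normal(4.0, 1.0, size=(100, 2))])

# Truncated Dirichlet-process mixture (variational approximation): the
# model can switch off unneeded components, so the effective number of
# clusters is inferred rather than fixed in advance.
dpgmm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(X)

labels = dpgmm.predict(X)
probs = dpgmm.predict_proba(X)   # per-observation membership uncertainty
print("active components:", int(np.sum(dpgmm.weights_ > 0.01)))
print("max membership probability, first observation:", probs[0].max())
```

Note that scikit-learn fits this model with a variational approximation; the full MCMC treatment a Bayesian thesis would use yields richer posterior summaries but follows the same clustering logic.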
Abstract:
Vacuuming can be a source of indoor exposure to biological and non-biological aerosols, although there is little data describing the magnitude of emissions from the vacuum cleaner itself. We therefore sought to quantify emission rates of particles and bacteria from a large group of vacuum cleaners and to investigate their potential determinants, including temperature, dust bags, exhaust filters, price and age. Emissions of particles between 0.009 and 20 µm, and of bacteria, were measured from 21 vacuums. Ultrafine (<100 nm) particle emission rates ranged from 4.0 × 10⁶ to 1.1 × 10¹¹ particles min⁻¹. Emission rates of 0.54–20 µm particles ranged from 4.0 × 10⁴ to 1.2 × 10⁹ particles min⁻¹. PM2.5 emissions were between 2.4 × 10⁻¹ and 5.4 × 10³ µg min⁻¹. Bacteria emissions ranged from 0 to 7.4 × 10⁵ bacteria min⁻¹ and were poorly correlated with dust bag bacteria content and particle emissions. Large variability in the emission of all parameters was observed across the 21 vacuums, and this was largely not attributable to the range of determinant factors we assessed. Vacuum cleaner emissions contribute to indoor exposure to non-biological and biological aerosols during vacuuming, and this exposure may vary markedly depending on the vacuum used.
Abstract:
In keeping with the proliferation of free software development initiatives and the increased interest in the business process management domain, many open source workflow and business process management systems have appeared during the last few years and are now under active development. This upsurge gives rise to two important questions: what are the capabilities of these systems, and how do they compare to each other and to their closed source counterparts? In other words, what is the state of the art in the area? To gain insight into these questions, we have conducted an in-depth analysis of three of the major open source workflow management systems – jBPM, OpenWFE, and Enhydra Shark – the results of which are reported here. This analysis is based on the workflow patterns framework and provides a continuation of the series of evaluations performed using the same framework on closed source systems, business process modelling languages, and web-service composition standards. The results from the evaluations of the three open source systems are compared with each other and also with the results from evaluations of three representative closed source systems: Staffware, WebSphere MQ, and Oracle BPEL PM. The overall conclusion is that open source systems are targeted more toward developers than business analysts. They generally provide less support for the patterns than closed source systems, particularly with respect to the resource perspective, i.e. the various ways in which work is distributed amongst business users and managed through to completion.
Abstract:
Contamination of packaged foods by micro-organisms entering through air leaks can cause serious public health issues and cost companies large amounts of money through product recalls, compensation claims, consumer impact and subsequent loss of market share. The main source of contamination is leaks in packaging, which allow air, moisture and micro-organisms to enter the package; the cost of leaky packages to Australian food industries is estimated at close to AUD $35 million per year. In the food processing and packaging industry worldwide, there is an increasing demand for cost-effective, state-of-the-art inspection technologies that are capable of reliably detecting leaky seals and delivering products at six-sigma. This project develops non-destructive testing technology using digital imaging and sensing combined with a differential vacuum technique to assess the seal integrity of food packages on a high-speed production line. Flexible plastic packages are widely used and are the least expensive form of retaining the quality of the product. These packages can be sealed to maximise the shelf life of both dry and moist products. The seals of food packages need to be airtight so that the food content is not contaminated through contact with micro-organisms that enter as a result of air leakage. Airtight seals also extend the shelf life of packaged foods, and manufacturers attempt to prevent food products with leaky seals from being sold to consumers. Many current NDT (non-destructive testing) methods for testing the seals of flexible packages are best suited to random sampling and laboratory purposes. The three most commonly used methods are vacuum/pressure decay, the bubble test, and helium leak detection. Although these methods can detect very fine leaks, they are limited by their high processing time and are not viable on a production line. Two non-destructive in-line packaging inspection machines are currently available and are discussed in the literature review. The detailed design and development of the High-Speed Sensing and Detection System (HSDS) is the fundamental requirement of this project and of the future prototype and production unit. Successful laboratory testing was completed, and a methodical design procedure was needed to arrive at a successful concept. The mechanical tests confirmed the vacuum hypothesis and seal integrity with good, consistent results. The electrical testing also provided solid results, enabling the researcher to move the project forward with a degree of confidence. The laboratory design testing allowed the researcher to confirm theoretical assumptions before moving into the detailed design phase. Discussion of the development of alternative concepts in both the mechanical and electrical disciplines enabled the researcher to make informed decisions. Each major mechanical and electrical component is detailed through the research and design process. The design procedure methodically works through the major functions from both mechanical and electrical perspectives.
It also canvasses alternative ideas for the major components which, although sometimes impractical in this application, show that the researcher has exhausted the engineering and functional possibilities. Further concepts were then designed and developed for the entire HSDS unit based on previous practice and theory. It is envisaged that both the prototype and production versions of the HSDS would use standard industry-available components, manufactured and distributed locally. Future research and testing of the prototype unit could result in a successful trial unit being incorporated into a working food processing production environment. Recommendations and future work are discussed, along with options in other food processing and packaging disciplines, and in other areas of the non-food processing industry.
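As a hedged sketch of the vacuum-decay principle underpinning the HSDS (the abstract does not give the actual detection mathematics, so the chamber volume, dwell time and reject threshold below are invented), a leak can be flagged from the pressure rise in a rigid evacuated test chamber of known volume:

```python
# Hedged illustration of the vacuum (pressure) decay principle the
# project builds on; all values and the pass/fail threshold are invented.
def leak_rate(volume_m3: float, p_start_pa: float, p_end_pa: float,
              dwell_s: float) -> float:
    """Leak rate Q = V * dP/dt in Pa.m^3/s for a rigid test chamber."""
    return volume_m3 * (p_end_pa - p_start_pa) / dwell_s

V = 0.5e-3            # 0.5 litre test chamber, illustrative
p0, p1 = 50.0, 65.0   # chamber pressure before/after dwell (Pa)
dwell = 2.0           # dwell time (s); short, as on a production line

q = leak_rate(V, p0, p1, dwell)
THRESHOLD = 1e-3      # Pa.m^3/s, hypothetical reject limit
print(f"leak rate = {q:.2e} Pa.m3/s ->", "REJECT" if q > THRESHOLD else "PASS")
```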
Abstract:
As civil infrastructures such as bridges age, there is concern for safety and a need for cost-effective and reliable monitoring tools. Different diagnostic techniques are available nowadays for the structural health monitoring (SHM) of bridges. Acoustic emission is one such technique, with the potential to predict failure. The phenomenon of the rapid release of energy within a material by crack initiation or growth, in the form of stress waves, is known as acoustic emission (AE). The AE technique involves recording the stress waves by means of sensors and subsequently analysing the recorded signals, which then convey information about the nature of the source. AE can be used as a local SHM technique to monitor specific regions with a visible presence of cracks or crack-prone areas, such as welded regions and joints with bolted connections, or as a global technique to monitor the whole structure. The strength of the AE technique lies in its ability to detect active cracks, thus helping to prioritise maintenance work by focusing on active rather than dormant cracks. In spite of being a promising tool, some challenges still stand in the way of the successful application of the AE technique. One is the generation of large amounts of data during testing; hence effective data analysis and management are necessary, especially for long-term monitoring uses. Complications also arise because a number of spurious sources can give AE signals; therefore, different source discrimination strategies are necessary to distinguish genuine signals from spurious ones. Another major challenge is the quantification of the damage level by appropriate analysis of the data. Intensity analysis using severity and historic indices, as well as b-value analysis, are important methods and will be discussed and applied to the analysis of laboratory experimental data in this paper.
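For illustration, b-value analysis fits the Gutenberg-Richter style relation log10 N(≥A) = a − b(A/20) to AE amplitude data; a falling b-value over successive windows is commonly read as a shift from distributed micro-cracking toward localised macro-cracking. A minimal sketch on synthetic amplitudes (not the paper's experimental data):

```python
import numpy as np

def ae_b_value(amplitudes_db: np.ndarray) -> float:
    """Least-squares b-value from the Gutenberg-Richter style relation
    log10 N(>=A) = a - b * (A_dB / 20), as used in AE b-value analysis."""
    amps = np.sort(amplitudes_db)
    n_ge = np.arange(len(amps), 0, -1)          # cumulative count >= A
    slope, _ = np.polyfit(amps / 20.0, np.log10(n_ge), 1)
    return -slope

# Synthetic AE amplitudes (dB), illustrative only.
rng = np.random.default_rng(1)
amps = 40.0 + rng.exponential(scale=10.0, size=500)
print(f"b-value: {ae_b_value(amps):.2f}")
```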
Abstract:
While Project Management (PM) is a well-accepted mode of managing organizations, more and more organizations are adopting PM in order to satisfy the diversified needs of application areas within a variety of industries and organizations. Concurrently, the number of PM practitioners and people involved at various levels of qualification is rising vigorously. Thus it is paramount to characterize, define and understand this field and its underlying strength, basis and development. For this purpose we refer to the sociology of actor-networks and to qualitative scientometrics, leading to the choice of the co-word analysis method, which enables us to capture the project management field and its dynamics. Results of a study based on the analysis of the EBSCO Business Source Premier database are presented and some future trends and scenarios proposed. The following main trends are confirmed, in alignment with previous studies: continuing interest in the “cost engineering” aspects; ongoing interest in economic aspects and contracts; how to deal with various project types (categorizations); and integration with Supply Chain Management and with Learning and Knowledge Management. Furthermore, besides these continuing trends, we note new areas of interest: the link between strategy and projects; governance; the importance of maturity (organizational performance and metrics, control); and Change Management. We see the actors (professional bodies, governmental bodies, agencies, universities, industries, researchers and practitioners) reinforcing their competing/cooperative strategies in the development of standards and certifications, and moving to more “business oriented” relationships with their members and main stakeholders (governments, institutions like the European Community, industries, agencies, NGOs…), at least at the central level.
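As a minimal sketch of the co-word method referred to above (the keyword records below are invented stand-ins echoing the trends named, not the EBSCO data), the analysis counts how often keyword pairs co-occur across records and normalises the counts, for example with the equivalence index e(i,j) = c_ij² / (c_i · c_j):

```python
from itertools import combinations
from collections import Counter

# Invented example records, each a set of indexing keywords.
records = [
    {"project management", "cost engineering", "contracts"},
    {"project management", "governance", "strategy"},
    {"project management", "maturity", "governance"},
    {"knowledge management", "supply chain management", "strategy"},
]

# Count pairwise co-occurrences across records.
co_counts = Counter()
for keywords in records:
    for pair in combinations(sorted(keywords), 2):
        co_counts[pair] += 1

# Normalise with the equivalence index used in co-word studies.
freq = Counter(k for rec in records for k in rec)
for (a, b), c in co_counts.most_common(3):
    e = c**2 / (freq[a] * freq[b])
    print(f"{a} <-> {b}: co-occurrences={c}, equivalence={e:.2f}")
```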
Abstract:
Road traffic noise affects the quality of life in areas adjoining a road. The effects of traffic noise on people are wide-ranging and may include sleep disturbance and a negative impact on work efficiency. To address the problem of traffic noise, it is necessary to estimate the noise level. For this, a number of noise estimation models have been developed which can estimate noise at receptor points based on simple configurations of buildings. In a real-world situation, however, multiple buildings form a built-up area, and it is almost impossible to consider all the multiple diffractions and reflections in sound propagation from the source to the receptor point. An engineering solution to such a real-world problem is needed to estimate noise levels in built-up areas.
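As a hedged illustration of what a receptor-point estimate involves in the simple case (the constants below are illustrative; operational models add corrections for traffic flow, speed, gradient, road surface and barriers), a road can be approximated as a line source whose level falls about 3 dB per doubling of distance:

```python
import math

# Hedged sketch of a simple receptor-point noise estimate; values are
# illustrative assumptions, not taken from any particular model.
def receptor_level(l_ref_db: float, r_ref_m: float, r_m: float,
                   barrier_db: float = 0.0) -> float:
    geometric = 10.0 * math.log10(r_m / r_ref_m)   # line-source spreading
    return l_ref_db - geometric - barrier_db

# 75 dB(A) measured 10 m from the carriageway, receptor 80 m away
# behind a barrier giving a nominal 5 dB insertion loss:
print(f"{receptor_level(75.0, 10.0, 80.0, barrier_db=5.0):.1f} dB(A)")
```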
Abstract:
Appearance-based loop closure techniques, which leverage the high information content of visual images and can be used independently of pose, are now widely used in robotic applications. The current state of the art in the field is Fast Appearance-Based Mapping (FAB-MAP), which has been demonstrated in several seminal robotic mapping experiments. In this paper, we describe OpenFABMAP, a fully open source implementation of the original FAB-MAP algorithm. Beyond the benefits of full user access to the source code, OpenFABMAP provides a number of configurable options, including rapid codebook training and interest point feature tuning. We demonstrate the performance of OpenFABMAP on a number of published datasets and demonstrate the advantages of quick algorithm customisation. We present results from OpenFABMAP's application in a highly varied range of robotics research scenarios.
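OpenFABMAP itself is a C++ library, and FAB-MAP's full generative model (visual-word co-occurrence captured with a Chow-Liu tree) is beyond a short sketch. The simplified Python bag-of-visual-words comparison below only illustrates the appearance-based principle: quantise local features against a trained codebook and treat high histogram similarity between views as a loop-closure candidate. File names are hypothetical, and clustering binary ORB descriptors with k-means is a simplification.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

# NOT the OpenFABMAP API; a simplified bag-of-visual-words place
# comparison that illustrates appearance-based loop closure.
orb = cv2.ORB_create(500)

def descriptors(path: str) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = orb.detectAndCompute(img, None)
    return des.astype(np.float32)

# 1) Train a codebook ("vocabulary") on descriptors from training images.
train_paths = ["place_a.png", "place_b.png"]          # hypothetical files
vocab = KMeans(n_clusters=64, n_init=3, random_state=0).fit(
    np.vstack([descriptors(p) for p in train_paths]))

def bow_histogram(path: str) -> np.ndarray:
    words = vocab.predict(descriptors(path))
    h = np.bincount(words, minlength=64).astype(float)
    return h / h.sum()

# 2) High histogram similarity between the current view and an earlier
#    one suggests a loop-closure candidate, independently of pose.
h_now, h_old = bow_histogram("frame_0421.png"), bow_histogram("frame_0012.png")
cosine = float(h_now @ h_old / (np.linalg.norm(h_now) * np.linalg.norm(h_old)))
print(f"appearance similarity: {cosine:.2f}")
```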
Abstract:
This paper reports an observational study of pedestrian crossing behaviors conducted at signalized crosswalks in urban areas of Singapore and Beijing on typical workdays. Each crosswalk was observed three times in different periods, i.e. normal hours, lunch hours, and rush hours. A total of 103,956 pedestrians were observed. The results showed that lane type, lane number, intersection type, and culture had a significant effect on illegal pedestrian crossing in both cities; the observation period had no significant effect on pedestrian violations in either city; and the violation rate in Singapore was lower than that in Beijing. However, observers reported that illegal crossing by vulnerable pedestrians, e.g. pregnant women, people with mobility impairments, and the elderly, was more evident in Singapore than in Beijing. The evidence supported the hypothesis that violations were related to pedestrians' cognition of the definition of safety.
Abstract:
Knowledge has been recognised as a powerful yet intangible asset, which is difficult to manage. This is especially true in a project environment, where there is the potential to repeat mistakes rather than learn from previous experiences. The literature in the project management field has recognised the importance of knowledge sharing (KS) within and between projects. However, studies in that field focus primarily on KS mechanisms, including lessons learned (LL) and post-project reviews, as the source of knowledge for future projects, and only some preliminary research has been carried out on the aspects of project management offices (PMOs) and organisational culture (OC) in KS. This study investigated KS behaviours in an inter-project context, with a particular emphasis on the role of trust, OC and a range of knowledge sharing mechanisms (KSM) in achieving successful inter-project knowledge sharing (I-PKS). An extensive literature search resulted in the development of an I-PKS Framework, which defined the scope of the research and shaped its initial design. The literature review indicated that existing research relating to the three factors of OC, trust and KSM remains inadequate in its ability to fully explain the role of these contextual factors. In particular, the literature review identified these areas of interest: (1) the conflicting answers to some of the major questions related to KSM, (2) the limited empirical research on the role of different trust dimensions, (3) limited empirical evidence of the role of OC in KS, and (4) the insufficient research on KS in an inter-project context. The resulting Framework comprised the three main factors of OC, trust and KSM, demonstrating a more integrated view of KS in the inter-project context. Accordingly, the aim of this research was to examine the relationships between these three factors and KS by investigating behaviours related to KS from the project managers' (PMs') perspective. In order to achieve this aim, the research sought to answer the following research questions: 1. How does organisational culture influence inter-project knowledge sharing? 2. How does the existence of three forms of trust — (i) ability, (ii) benevolence and (iii) integrity — influence inter-project knowledge sharing? 3. How can different knowledge sharing mechanisms (relational, project management tools and processes, and technology) improve inter-project knowledge sharing behaviours? 4. How do the relationships between these three factors of organisational culture, trust and knowledge sharing mechanisms improve inter-project knowledge sharing? a. What are the relationships between the factors? b. What is the best fit for given cases to ensure more effective inter-project knowledge sharing? Using multiple case studies, this research was designed to build propositions emerging from cross-case data analysis. The four cases were chosen on the basis of theoretical sampling. All cases were large project-based organisations (PBOs) with a strong matrix-type structure, as per the typology proposed by the Project Management Body of Knowledge (PMBoK) (2008). Data were collected from the project management departments of the respective organisations. A range of analytical techniques were used to deal with the data, including pattern matching logic and explanation building analysis, complemented by the use of NVivo for data coding and management.
Propositions generated at the end of the analyses were further compared with the extant literature, and practical implications based on the data and literature were suggested in order to improve I-PKS. The findings of this research show that OC, trust, and KSM contribute to inter-project knowledge sharing, and suggest the existence of relationships between these factors. In view of that, this research identified the relationships between different trust dimensions, suggesting that integrity trust reinforces the relationship between ability trust and knowledge sharing. Furthermore, this research demonstrated that characteristics of culture and trust interact to reinforce preferences for particular mechanisms of knowledge sharing. This means that cultures with Clan-type characteristics are more likely to result in trusting relationships, and hence are more likely to use organic sources of knowledge for both tacit and explicit knowledge exchange. In contrast, cultures that are empirically driven and based on control, efficiency and measures (characteristics of the Hierarchy and Market types) display a tendency to develop trust primarily in the ability of non-organic sources, and therefore use these sources to share mainly explicit knowledge. This thesis contributes to the project management literature by providing a more integrative view of I-PKS, bringing the factors of OC, trust and KSM into the picture. A further contribution relates to the use of collaborative tools as a substitute for static LL databases and as a facilitator of tacit KS between geographically dispersed projects. This research adds to the literature on OC by providing rich empirical evidence of the relationships between OC and the willingness to share knowledge, and by providing empirical evidence that OC has an effect on trust; in doing so, this research extends the theoretical propositions outlined by previous research. This study also extends the research on trust by identifying the relationships between different trust dimensions, suggesting that integrity trust reinforces the relationship between ability trust and KS. Finally, this research provides some directions for future studies.
Abstract:
This study evaluated the effect of eye muscle area (EMA), ossification, carcass weight, marbling and rib fat depth on the incidence of dark cutting (pHu > 5.7) using routinely collected Meat Standards Australia (MSA) data. Data were obtained from 204,072 carcasses at a Western Australian processor between 2002 and 2008. Binomial data on pHu compliance were analysed using a logit model in a Bayesian framework. Increasing eye muscle area from 40 to 80 cm² increased pHu compliance by around 14% (P < 0.001) in carcasses of less than 350 kg. As carcass weight increased from 150 kg to 220 kg, compliance increased by 13% (P < 0.001), and younger cattle with lower ossification were also 7% more compliant (P < 0.001). As rib fat depth increased from 0 to 20 mm, pHu compliance increased by around 10% (P < 0.001), yet marbling had no effect on dark cutting. Increasing musculature and growth, combined with good nutrition, will minimise dark cutting in Australian beef.
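As a hedged sketch of fitting a binomial logit model in a Bayesian framework, in the spirit of the analysis above (the data below are synthetic and the single covariate is illustrative; the real study modelled 204,072 carcasses with several carcass traits):

```python
import numpy as np
import pymc as pm

# Synthetic stand-in data: groups of carcasses with varying eye muscle
# area (EMA) and a pHu-compliant count per group. Illustrative only.
rng = np.random.default_rng(0)
ema = rng.uniform(40, 80, size=200)            # eye muscle area, cm^2
n = np.full(200, 50)                           # carcasses per group
true_p = 1 / (1 + np.exp(-(-1.0 + 0.05 * (ema - 60))))
k = rng.binomial(n, true_p)                    # compliant counts

with pm.Model():
    alpha = pm.Normal("alpha", 0.0, 2.0)
    beta = pm.Normal("beta_ema", 0.0, 1.0)
    p = pm.math.invlogit(alpha + beta * (ema - 60.0))
    pm.Binomial("compliant", n=n, p=p, observed=k)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print("posterior mean EMA effect:",
      float(idata.posterior["beta_ema"].mean()))
```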
Abstract:
Designing practical rules for controlling invasive species is a challenging task for managers, particularly when species are long-lived and have complex life cycles and high dispersal capacities. Previous findings derived from plant matrix population analyses suggest that effective control of long-lived invaders may be achieved by focusing on killing adult plants. However, the cost-effectiveness of managing different life stages has not been evaluated. We illustrate the benefits of integrating matrix population models with decision theory to undertake this evaluation, using empirical data from the largest infestation of mesquite (Leguminosae: Prosopis spp.) within Australia. Our model includes the mesquite life cycle, different dispersal rates and control actions that target individuals at different life stages, with varying costs depending on the intensity of the control effort. We then use stochastic dynamic programming to derive cost-effective control strategies that minimize both the cost of controlling the core infestation locally below a density threshold and the future cost of control arising from infestation of adjacent areas via seed dispersal. Through sensitivity analysis, we show that four robust management rules guide the allocation of resources between mesquite life stages for this infestation: (i) when there is no seed dispersal, no action is required until the density of adults exceeds the control threshold, and then only control of adults is needed; (ii) when there is seed dispersal, the control strategy depends only on knowledge of the densities of adults and large juveniles (LJ) and on broad categories of dispersal rates; (iii) if the density of adults is higher than the density of LJ, controlling adults is most cost-effective; (iv) alternatively, if the density of LJ is equal to or higher than that of adults, management effort should be spread between adults, large and, to a lesser extent, small juveniles, but never saplings. Synthesis and applications: in this study, we show that simple rules can be found for managing invasive plants with complex life cycles and high dispersal rates when population models are combined with decision theory. In the case of our mesquite population, focussing effort on controlling adults is not always the most cost-effective way to meet the management objective.
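As a toy illustration of the stochastic dynamic programming step (the states, transition probabilities and costs below are invented; the paper's model tracks several life stages and dispersal rates), value iteration picks, for each density state, the action minimising expected discounted cost:

```python
import numpy as np

# Invented toy problem: discrete adult-density states, three actions.
states = range(4)                     # 0 = below threshold ... 3 = severe
actions = {"none": 0.0, "control_adults": 40.0, "control_juveniles": 15.0}

def transition(s: int, a: str) -> list[tuple[float, int]]:
    """(probability, next_state) pairs; control pushes density down."""
    if a == "none":
        return [(0.7, min(s + 1, 3)), (0.3, s)]
    if a == "control_adults":
        return [(0.8, max(s - 1, 0)), (0.2, s)]
    return [(0.4, max(s - 1, 0)), (0.6, s)]      # juveniles: weaker effect

damage = [0.0, 20.0, 60.0, 120.0]     # cost of exceeding the threshold
gamma = 0.95                          # discount factor

V = np.zeros(4)
for _ in range(500):                  # value iteration to convergence
    Q = {a: [damage[s] + c + gamma * sum(p * V[s2]
                                         for p, s2 in transition(s, a))
             for s in states]
         for a, c in actions.items()}
    V = np.min([Q[a] for a in actions], axis=0)

policy = {s: min(actions, key=lambda a: Q[a][s]) for s in states}
print(policy)   # cheapest long-run action for each density state
```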
Abstract:
This paper describes system identification, estimation and control of translational motion and heading angle for a cost-effective open-source quadcopter, the MikroKopter. The dynamics of its built-in sensors, its roll and pitch attitude controller, and the system latencies are determined and used to design a computationally inexpensive multi-rate velocity estimator that fuses data from the built-in inertial sensors and a low-rate onboard laser range finder. Control is performed using a nested loop structure that is also computationally inexpensive and incorporates the different sensors. Experimental results for the estimator and for closed-loop positioning are presented and compared with ground truth from a motion capture system.
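As a hedged, minimal sketch of the multi-rate idea (the gains, rates and noise values are illustrative assumptions, not the paper's identified dynamics): integrate the high-rate accelerometer for velocity and blend in the low-rate velocity measurement, here standing in for the laser-derived one, whenever it arrives:

```python
import numpy as np

# Minimal multi-rate complementary velocity estimator; all values are
# illustrative assumptions, not the paper's identified parameters.
IMU_DT, LASER_EVERY = 0.01, 10      # 100 Hz IMU, 10 Hz "laser" updates
K = 0.2                             # correction gain on slow updates

rng = np.random.default_rng(0)
true_v, v_hat = 0.0, 0.0
for k in range(500):
    accel_true = 0.5 * np.sin(0.02 * k)            # synthetic manoeuvre
    true_v += accel_true * IMU_DT
    # Predict: integrate biased, noisy accelerometer at the fast rate.
    accel_meas = accel_true + 0.05 + rng.normal(0, 0.02)
    v_hat += accel_meas * IMU_DT
    # Correct: blend in the slow velocity measurement when it arrives.
    if k % LASER_EVERY == 0:
        v_laser = true_v + rng.normal(0, 0.01)
        v_hat += K * (v_laser - v_hat)

print(f"final velocity error: {abs(v_hat - true_v):.3f} m/s")
```

The correction step bounds the drift that pure accelerometer integration would otherwise accumulate, which is the purpose of fusing the two rates.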
Abstract:
Privacy is an important component of freedom and plays a key role in protecting fundamental human rights. It is becoming increasingly difficult to ignore the fact that, without appropriate levels of privacy, a person's rights are diminished. Users want to protect their privacy, particularly in "privacy invasive" areas such as social networks. However, social network users seldom know how to protect their own privacy through online mechanisms. What is required is an emerging concept that provides users with legitimate control over their own personal information, whilst preserving and maintaining the advantages of engaging with online services such as social networks. This paper reviews Privacy by Design (PbD) and shows how it applies to diverse privacy areas. Such an approach will move towards mitigating many of the privacy issues in online information systems and can be a potential pathway for protecting users' personal information. The research also poses many questions in need of further investigation for different open source distributed social networks. Findings from this research will lead to a novel distributed architecture that provides more transparent and accountable privacy for the users of online information systems.