970 results for Communication Methodology
Abstract:
The JoMeC Network project had three key objectives. These were to: 1. Benchmark the pedagogical elements of journalism, media and communication (JoMeC) programs at Australian universities in order to develop a set of minimum academic standards, to be known as Threshold Learning Outcomes (TLOs), which would be applicable to the disciplines of Journalism, Communication and/or Media Studies, and Public Relations; 2. Build a learning and teaching network of scholars across the JoMeC disciplines to support collaboration, develop leadership potential among educators, and progress shared priorities; 3. Create an online resources hub to support learning and teaching excellence and foster leadership in learning and teaching in the JoMeC disciplines. In order to benchmark the pedagogical elements of the JoMeC disciplines, the project started with a comprehensive review of the disciplinary settings of journalism, media and communication-related programs within Higher Education in Australia, together with an analysis of capstone units (or subjects) offered in JoMeC-related degrees. This audit revealed a diversity of degree titles, disciplinary foci, projected career outcomes and pedagogical styles in the 36 universities that offered JoMeC-related degrees in 2012, highlighting the difficulties of classifying the JoMeC disciplines collectively or singularly. Instead of attempting to map all disciplines related to journalism, media and communication, the project team opted to create generalised TLOs for these fields, coupled with detailed TLOs for bachelor-level qualifications in three selected JoMeC disciplines: Journalism, Communication and/or Media Studies, and Public Relations. The initial review's outcomes shaped the methodology that was used to develop the TLOs. Given the complexity of the JoMeC disciplines and the diversity of degrees across the network, the project team deployed an issue-framing process to create TLO statements.
This involved several phases, including discussions with an issue-framing team (an advisory group of representatives from different disciplinary areas); research into accreditation requirements and industry-produced materials about employment expectations; evaluation of learning outcomes from universities across Australia; reviews of scholarly literature; as well as input from disciplinary leaders in a variety of forms. Draft TLOs were refined after further consultation with industry stakeholders and the academic community via email, telephone interviews, and meetings and public forums at conferences. This process was used to create a set of common TLOs for JoMeC disciplines in general and extended TLO statements for the specific disciplines of Journalism and Public Relations. A TLO statement for Communication and/or Media Studies remains in draft form. The Australian and New Zealand Communication Association (ANZCA) and the Journalism Education and Research Association of Australia (JERAA) have agreed to host meetings to review, revise and further develop the TLOs. The aim is to support the JoMeC Network's sustainability and the TLOs' future development and use.
Abstract:
Purpose – The paper examines the concept of silent communication and its implications for marketing communication. It defines silent communication and proposes an analytic framework enabling an expanded view of marketing communication.
Design/methodology/approach – By explicitly adopting a customer-oriented perspective, combined with insights from service marketing and relationship communication, the paper extends current models of marketing communication.
Findings – The paper identifies different types of silent communication and presents new perspectives on marketing communication. The authors outline a framework for understanding which forms of marketing communication the company can and cannot control, and discuss the implications of this.
Research limitations/implications – The paper concentrates on a conceptual analysis, offering a number of empirical illustrations. The conceptual development creates new research issues that should lead to a deeper understanding of customers' meaning creation, actions and reactions.
Practical implications – Silent communication constitutes a managerial challenge, as it is often invisible to management. The paper points to the need to develop methods that reveal the effects of silent communication, as well as guidelines for managing it.
Originality/value – The customer-based perspective and the focus on silent communication provide a completely new approach to analysing and understanding marketing communication. The paper contributes to service marketing and marketing communication research by introducing conceptualisations of silent communication that are of interest to both academic researchers and practitioners.
Abstract:
Synthesis of enterolactone, the first lignan of human origin, starting from 3-methoxycinnamyl alcohol employing a 5-exo-trig radical cyclisation reaction of mixed bromoacetal as the key step is described.
Abstract:
An enantiospecific formal total synthesis of (-)-ceratopicanol starting from the readily and abundantly available monoterpene (R)-limonene is described. A combination of Claisen rearrangement-intramolecular diazo-ketone cyclopropanation-regiospecific reductive cyclopropane cleavage reactions are employed for the stereo- and regiospecific generation of the two vicinal ring junction quaternary carbon atoms.
Abstract:
The advent and evolution of geohazard warning systems make for an interesting study. The two broad fields that are immediately visible are geohazard evaluation and subsequent warning dissemination. Evidently, the latter field lacks any systematic study or standards. Arbitrarily organized and vague data and information on warning techniques create confusion and indecision. The purpose of this review is to systematize the available bulk of information on warning systems so that meaningful insights can be derived through decidable flowcharts and a developmental process can be undertaken. Hence, the methods and technologies of numerous geohazard warning systems have been assessed by putting them into suitable categories, for a better understanding of possible ways to analyze their efficacy as well as their shortcomings. By establishing a classification scheme based on extent, control, time period, and advancements in technology, the geohazard warning systems available in the literature could be comprehensively analyzed and evaluated. Although major advancements have taken place in geohazard warning systems in recent times, they have lacked a complete purpose. Some systems only assess the hazard and wait for other means to communicate it, while others are designed only for communication and wait for the hazard information to be provided, which usually arrives after the mishap. Primarily, systems are left at the mercy of administrators and service providers and do not operate in real time. An integrated hazard evaluation and warning dissemination system could solve this problem. Warning systems have also suffered from complexity of nature, the requirement of expert-level monitoring, extensive and dedicated infrastructural setups, and so on. The user community, which would greatly appreciate a convenient, fast, and generalized warning methodology, is surveyed in this review.
The review concludes with the future scope of research in the field of hazard warning systems and some suggestions toward the development of an efficient, automated, integrated geohazard warning system. DOI: 10.1061/(ASCE)NH.1527-6996.0000078. © 2012 American Society of Civil Engineers.
Abstract:
A comprehensive design flow is proposed for the design of Micro Electro Mechanical Systems fabricated using the SOIMUMPs process. Many designers do not model the temperature dependency of electrical conductivity, thermal conductivity and the convection coefficient, as it is cumbersome to create and incorporate these dependencies in existing FEM simulators. Capturing them is critical, particularly for structures that are electrically actuated. Lookup tables that capture the temperature dependency of electrical conductivity, thermal conductivity and the convection coefficient are created. These lookup tables are taken as inputs by a commercially available FEM simulator to model the semiconductor behavior. It is demonstrated that when the temperature dependency of all the above-mentioned parameters is not captured, the error in the estimated maximum temperature (for a given structure) can be as high as 30%, and the error in the estimated resistance value under the same conditions is as high as 40%. When the temperature dependency of these parameters is considered, the error with respect to the measured values is less than 5%. It is evident that errors in temperature estimates lead to erroneous results from mechanical simulations as well.
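The lookup-table approach described above can be sketched in a few lines. The temperature points, resistivity values and beam geometry below are illustrative placeholders, not SOIMUMPs process data; the sketch only shows how a piecewise-linear table feeds a temperature-aware resistance estimate, and how a constant-property model diverges from it.

```python
import numpy as np

# Hypothetical lookup table: temperature (K) vs electrical resistivity (ohm*m).
# Values are invented for illustration, not measured process data.
T_pts   = np.array([300.0, 400.0, 500.0, 600.0])
rho_pts = np.array([2.3e-5, 3.1e-5, 4.0e-5, 5.0e-5])

def resistivity(T):
    """Piecewise-linear interpolation of the lookup table."""
    return np.interp(T, T_pts, rho_pts)

def beam_resistance(T, length=200e-6, area=4e-11):
    """Resistance of a uniform beam (illustrative geometry) at temperature T."""
    return resistivity(T) * length / area

# Constant-property model (room temperature) vs temperature-aware model
R_const = beam_resistance(300.0)   # what a simulator without the table reports
R_hot   = beam_resistance(550.0)   # resistance at the actual operating point
error_pct = 100.0 * (R_hot - R_const) / R_hot
```

In a full electro-thermal FEM flow the table would be evaluated per element at each solver iteration; here a single lumped beam suffices to show why ignoring the dependency understates the resistance at elevated temperature.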
Abstract:
The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node to node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, there are a number of key design concepts that enable good interconnect design in the age of highly-scaled CMOS: an emphasis on highly-digital approaches to solving 'analog' problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through two design examples, at the extremes of the power and performance spectra.
A novel all-digital clock and data recovery technique for high-performance, high density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
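The phase-selection step of the eye-scanning scheme above can be illustrated with a toy model. The 64-tap delay line, the eye boundaries and the `best_phase` helper are assumptions for illustration only; the sketch ignores the ping-pong role swap and delay-line wraparound, and shows just how per-phase agreement between the scan clock and the data clock yields the best sampling phase.

```python
def best_phase(agreement):
    """Given per-phase agreement between scan and data samples
    (True = the scan-clock sample matched the recovered bit),
    return the centre of the longest contiguous run of agreement,
    i.e. the widest part of the eye."""
    best_start, best_len = 0, 0
    start = None
    for i, ok in enumerate(agreement + [False]):   # trailing sentinel closes a run
        if ok and start is None:
            start = i                              # run begins
        elif not ok and start is not None:
            if i - start > best_len:               # run ends; keep if longest
                best_start, best_len = start, i - start
            start = None
    return best_start + best_len // 2

# Hypothetical 64-tap scan across the 2 UI delay line: eye open on taps 20..43
eye = [20 <= i < 44 for i in range(64)]
data_phase = best_phase(eye)    # tap 32, the centre of the open eye
```

In the actual scheme this decision would be recomputed continuously, with the two clocks exchanging scan and data roles after each update; the same agreement data could also drive equalizer adaptation, as the abstract notes.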
On the other side of the performance/power spectra, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled together on-demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.
Abstract:
This paper proposes a design methodology to stabilize relative equilibria in a model of identical, steered particles moving in the plane at unit speed. Relative equilibria either correspond to parallel motion of all particles with fixed relative spacing or to circular motion of all particles around the same circle. Particles exchange relative information according to a communication graph that can be undirected or directed and time-invariant or time-varying. The emphasis of this paper is to show how previous results assuming all-to-all communication can be extended to a general communication framework. © 2008 IEEE.
Abstract:
This paper proposes a design methodology to stabilize isolated relative equilibria in a model of all-to-all coupled identical particles moving in the plane at unit speed. Isolated relative equilibria correspond to either parallel motion of all particles with fixed relative spacing or to circular motion of all particles with fixed relative phases. The stabilizing feedbacks derive from Lyapunov functions that prove exponential stability and suggest almost global convergence properties. The results of the paper provide a low-order parametric family of stabilizable collectives that offer a set of primitives for the design of higher-level tasks at the group level. © 2007 IEEE.
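A minimal simulation of the kind of all-to-all steering law these papers analyse: each particle moves in the plane at unit speed, and its heading is steered by a sinusoidal coupling to the other headings. The gains, integration scheme and Kuramoto-style feedback below are illustrative assumptions, not the papers' Lyapunov-derived control laws; with positive coupling and zero nominal turning rate the headings synchronise, i.e. parallel motion with fixed relative spacing.

```python
import numpy as np

def simulate(n=6, steps=4000, dt=0.01, omega0=0.0, K=1.0, seed=0):
    """Euler simulation of n unit-speed particles with all-to-all
    sinusoidal heading coupling:
        theta_k' = omega0 + (K/n) * sum_j sin(theta_j - theta_k)
    K > 0 with omega0 = 0 drives the headings together (parallel motion);
    omega0 != 0 would instead steer every particle around a circle."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)          # random initial headings
    pos = np.zeros((n, 2))
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]    # diff[k, j] = theta_j - theta_k
        u = omega0 + (K / n) * np.sin(diff).sum(axis=1)
        theta = theta + dt * u                    # steering update
        pos += dt * np.column_stack((np.cos(theta), np.sin(theta)))
    return theta, pos

theta, pos = simulate()
# Heading order parameter |mean(e^{i*theta})|: -> 1 when headings synchronise
order = abs(np.exp(1j * theta).mean())
```

The order parameter serves the same diagnostic role as the phase potentials in the papers: it is maximised by synchronised headings and minimised by balanced ones.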
Abstract:
As new multi-party edge services are deployed on the Internet, application-layer protocols with complex communication models and event dependencies are increasingly being specified and adopted. To ensure that such protocols (and compositions thereof with existing protocols) do not result in undesirable behaviors (e.g., livelocks) there needs to be a methodology for the automated checking of the "safety" of these protocols. In this paper, we present ingredients of such a methodology. Specifically, we show how SPIN, a tool from the formal systems verification community, can be used to quickly identify problematic behaviors of application-layer protocols with non-trivial communication models—such as HTTP with the addition of the "100 Continue" mechanism. As a case study, we examine several versions of the specification for the Continue mechanism; our experiments mechanically uncovered multi-version interoperability problems, including some which motivated revisions of HTTP/1.1 and some which persist even with the current version of the protocol. One such problem resembles a classic degradation-of-service attack, but can arise between well-meaning peers. We also discuss how the methods we employ can be used to make explicit the requirements for hardening a protocol's implementation against potentially malicious peers, and for verifying an implementation's interoperability with the full range of allowable peer behaviors.
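The paper's tool of choice is SPIN with Promela models; as a hedged illustration of the same explicit-state idea, the toy sketch below breadth-first-searches the global state space of two hypothetical peers that each send before receiving on bounded channels, and reports a deadlock when no transition is enabled before both have terminated. The processes and channel model are invented for illustration and are not taken from the HTTP "100 Continue" case study.

```python
from collections import deque

def moves(state, cap):
    """Enabled transitions from a global state.
    state = (pcA, pcB, ab, ba); ab and ba are tuples modelling
    bounded message channels from A to B and from B to A."""
    pcA, pcB, ab, ba = state
    out = []
    # Process A: first send a request on ab, then wait for a reply on ba.
    if pcA == 0 and len(ab) < cap:
        out.append((1, pcB, ab + ('reqA',), ba))
    if pcA == 1 and ba:
        out.append((2, pcB, ab, ba[1:]))
    # Process B: symmetric - send on ba, then receive on ab.
    if pcB == 0 and len(ba) < cap:
        out.append((pcA, 1, ab, ba + ('reqB',)))
    if pcB == 1 and ab:
        out.append((pcA, 2, ab[1:], ba))
    return out

def find_deadlock(cap, init=(0, 0, (), ())):
    """Breadth-first search of the reachable global states; returns a
    deadlocked state (no enabled transition while some process is
    unfinished), or None if every dead end is normal termination."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        succ = moves(s, cap)
        if not succ and (s[0] != 2 or s[1] != 2):
            return s
        for t in succ:
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None

# With synchronous-style channels (capacity 0) both peers block on send:
deadlock = find_deadlock(0)   # -> the initial state itself
# With capacity-1 channels the exchange completes:
ok = find_deadlock(1)         # -> None
```

SPIN performs this search over the full Promela semantics (with partial-order reduction, liveness properties, and so on); the point here is only that "mechanically uncovering problematic behaviors" reduces to exhaustive exploration of a protocol's global state graph.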
Abstract:
A methodology for improved power-controller switching in mobile Body Area Networks operating within the ambient healthcare environment is proposed. The work extends anti-windup and bumpless-transfer results to provide a solution to the ambulatory networking problem that ensures sufficient biometric data can always be regenerated at the base station, thereby guaranteeing satisfactory quality of service for healthcare providers. Compensation is provided for the nonlinear hardware constraints that are typical of such networks, and graceful performance degradation in the face of hardware output-power saturation is demonstrated, conserving network energy in an optimal fashion.
Abstract:
Purpose – Research into the communication skills of individuals with Cornelia de Lange syndrome (CdLS) is extremely limited. This paper aims to evaluate the nature of these skills and impairments in CdLS using a detailed informant assessment of pre-verbal communication skills.
Design/methodology/approach – The study used the Pre-verbal Communication Schedule to evaluate communication skills in individuals with CdLS (n = 14), aged five to 14 years. The group was compared with a contrast group of individuals with Cri du Chat syndrome (CdCS; n = 14) who were matched for age and intellectual ability.
Findings – A significant difference was identified in understanding non-vocal communication (p < 0.005), with the CdLS group showing a greater deficit. These findings indicate the presence of a syndrome-specific deficit in understanding non-verbal communication in individuals with CdLS and suggest that there may be a dissociation between the processing of verbal and non-verbal communication.
Originality/value – The findings indicate that, in many ways, these two syndrome groups are not dissimilar in terms of their communication skills. However, individuals with CdLS show a syndrome-specific deficit in understanding non-vocal communication relative to the CdCS group.
Abstract:
Plasma etch is a key process in modern semiconductor manufacturing facilities as it offers process simplification and yet greater dimensional tolerances compared to wet chemical etch technology. The main challenge of operating plasma etchers is to maintain a consistent etch rate spatially and temporally for a given wafer and for successive wafers processed in the same etch tool. Etch rate measurements require expensive metrology steps and therefore in general only limited sampling is performed. Furthermore, the results of measurements are not accessible in real-time, limiting the options for run-to-run control. This paper investigates a Virtual Metrology (VM) enabled Dynamic Sampling (DS) methodology as an alternative paradigm for balancing the need to reduce costly metrology with the need to measure more frequently and in a timely fashion to enable wafer-to-wafer control. Using a Gaussian Process Regression (GPR) VM model for etch rate estimation of a plasma etch process, the proposed dynamic sampling methodology is demonstrated and evaluated for a number of different predictive dynamic sampling rules. © 2013 IEEE.
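A toy version of the VM-enabled dynamic sampling loop, assuming a minimal zero-mean GPR with a squared-exponential kernel and a simple variance-threshold sampling rule. The drifting etch-rate function, the kernel length-scale and the threshold are illustrative assumptions, not the paper's models or data; the point is that metrology is triggered only when the virtual-metrology model becomes too uncertain.

```python
import numpy as np

def rbf(a, b, ls=5.0):
    """Squared-exponential kernel, unit signal variance."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

class GPR:
    """Minimal 1-D Gaussian process regression (zero prior mean)."""
    def __init__(self, noise=1e-4):
        self.noise = noise
        self.X = np.empty(0)
        self.y = np.empty(0)

    def add(self, x, y):
        self.X = np.append(self.X, x)
        self.y = np.append(self.y, y)

    def predict(self, x):
        if self.X.size == 0:
            return 0.0, 1.0                     # prior mean and variance
        K = rbf(self.X, self.X) + self.noise * np.eye(self.X.size)
        k = rbf(np.array([x]), self.X)[0]
        mean = k @ np.linalg.solve(K, self.y)
        var = 1.0 - k @ np.linalg.solve(K, k)
        return mean, max(var, 0.0)

def etch_rate(t):
    """Hypothetical slowly drifting etch rate (nm/min), standing in for
    an expensive physical metrology measurement."""
    return 100.0 + 0.5 * t

gp, measured = GPR(), []
for t in range(20):                             # 20 successive wafers
    _, var = gp.predict(float(t))
    if var > 0.1:                               # dynamic sampling rule: measure
        gp.add(float(t), etch_rate(t))          # only when the VM model is too
        measured.append(t)                      # uncertain about this wafer
```

With these settings the loop measures the first wafer (prior variance is maximal), skips wafers the model can already predict confidently, and measures again as extrapolation uncertainty grows, which is the cost/uncertainty trade-off the paper's predictive sampling rules formalise.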
Abstract:
The X-parameter based nonlinear modelling tools have been adopted as the foundation for the advanced methodology of experimental characterisation and design of passive nonlinear devices. Based upon the formalism of the X-parameters, it provides a unified framework for co-design of antenna beamforming networks, filters, phase shifters and other passive and active devices of the RF front-end, taking into account the effect of their nonlinearities. The equivalent circuits of the canonical elements are readily incorporated in the models, thus enabling evaluation of the PIM effect on the performance of individual devices and their assemblies. An important advantage of the presented methodology is its compatibility with the industry-standard commercial RF circuit simulator Agilent ADS.
The major challenge in the practical implementation of the proposed approach concerns the experimental retrieval of the X-parameters for canonical passive circuit elements. To the best of our knowledge, commercial PIM testers and practical laboratory test instruments are inherently narrowband and do not allow simultaneous vector measurements at the PIM and harmonic frequencies. Alternatively, existing nonlinear vector network analysers (NVNA) support X-parameter measurements over broad frequency bands with a range of stimuli, but their dynamic range is insufficient for PIM characterisation in practical circuits. Further opportunities for adapting the X-parameters methodology to PIM characterisation of passive devices using existing test instruments are explored.
Abstract:
The increasing and intensive integration of distributed energy resources into distribution systems requires adequate methodologies to ensure secure operation according to the smart grid paradigm. In this context, SCADA (Supervisory Control and Data Acquisition) systems are an essential infrastructure. This paper presents a conceptual design of a communication and resources management scheme based on an intelligent SCADA system with a decentralized, flexible approach that is adaptive to the context (context awareness). The methodology is used to support energy resource management, considering all the involved costs, power flows, and electricity prices, and leading to network reconfiguration. The methodology also addresses the definition of each player's information access permissions for each resource. The paper includes a case study on a 33-bus network that considers intensive use of distributed energy resources in five distinct implemented operation contexts.