989 results for network analyzer measurement
Abstract:
This thesis investigates the immunity of a telecommunications terminal device to radiated RF fields at frequencies above one gigahertz. The device under test is a product development version of the CTU modem manufactured by Tellabs Oy. The theoretical part reviews the theory of electromagnetic waves and the mechanisms by which a radiated RF field gives rise to electromagnetic interference. The most important characteristics of the instruments needed for radiated-disturbance EMC measurements are also presented, and the requirements for EMC instrumentation suitable for frequencies above one gigahertz are discussed. At present, EMC standards set no requirements for the RF field immunity of telecommunications equipment above one gigahertz; for this reason, the thesis also discusses the most likely interference sources in this frequency range. In the measurements, the RF field immunity of the CTU was studied in the frequency range 1-4.2 GHz. The measurements were performed both in an anechoic chamber and in a GTEM cell. The effect of additional metallic shields on the field immunity of the CTU was also studied in the GTEM cell.
Abstract:
The literature survey covers retention mechanisms, factors affecting retention, and microparticles. Commercial microparticle retention systems and methods for measuring retention were also reviewed, including optical retention measurement with the RPA and Lasentec FBRM instruments. The experimental part studies different cationic polyacrylamides, anionic silica, bentonite, and a new-generation micropolymer. In these studies the dosage, dosing order, and dosing history were the varied factors. The experimental work was done with the RPA apparatus, with which the retention process can be followed in real time. The tests showed that silica yielded better retention when dosed, contrary to common practice, before the polymer. Silica was also very dependent on the polymer dosage. With bentonite, good colloidal retention was achieved at relatively low doses, and unlike silica, bentonite was not dependent on the polymer dosage. The ratio of bentonite to polymer dosage becomes more decisive when high retention is required. Very high retention was achieved with three-component systems using bentonite. With silica, no improvement in retention was found in three-component systems compared with dual-component systems.
Abstract:
In recent years, the vulnerability of the network to natural hazards has received increasing attention. Moreover, operating at the limits of the network transmission capabilities has resulted in major outages during the past decade. One of the reasons for operating at these limits is that the network has become outdated. Therefore, new technical solutions are studied that could provide more reliable and more energy-efficient power distribution and also better profitability for the network owner. It is the development and price of power electronics that have made DC distribution an attractive alternative again. In this doctoral thesis, one type of low-voltage DC distribution system is investigated. More specifically, it is studied which current technological solutions, used at the customer-end, could provide better power quality for the customer when compared with the present system. To study the effect of a DC network on the customer-end power quality, a bipolar DC network model is derived. The model can also be used to identify the supply parameters when the V/kW ratio is approximately known. Although the model provides knowledge of the average behavior, it is shown that the instantaneous DC voltage ripple should be limited. Guidelines are given for choosing an appropriate capacitance value for the capacitor located at the input DC terminals of the customer-end. The structure of the customer-end is also considered. A comparison between the most common solutions is made based on their cost, energy efficiency, and reliability. In the comparison, special attention is paid to the passive filtering solutions, since the filter is considered a crucial element when the lifetime expenses are determined. It is found that the filter topology most commonly used today, namely the LC filter, does not provide an economic advantage over the hybrid filter structure. Finally, some typical control system solutions are introduced and their shortcomings are presented. As a solution to the customer-end voltage regulation problem, an observer-based control scheme is proposed. It is shown how different control system structures affect the performance. Performance meeting the requirements is achieved using only one output measurement when operating in a rigid network. Similar performance can be achieved in a weak grid with a DC voltage measurement. An additional improvement can be achieved when an adaptive gain scheduling-based control is introduced. In conclusion, the final power quality is determined by the sum of various factors, and the thesis provides guidelines for designing a system that improves the power quality experienced by the customer.
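The abstract mentions guidelines for sizing the capacitor at the customer-end DC input terminals so that the instantaneous DC voltage ripple stays limited. As an illustration only, not the thesis's actual design rule, the sketch below applies the generic energy-balance relation ΔV ≈ P/(2·ω·C·V_dc) for a single-phase customer-end inverter; the power level, bus voltage, ripple limit, and grid frequency are assumed example values.

```python
import math

def dc_link_capacitance(p_load_w, v_dc, ripple_amplitude_v, grid_freq_hz=50.0):
    """Rough DC-link capacitance estimate for a single-phase inverter stage.

    The AC-side power pulsates at twice the grid frequency; the capacitor
    must absorb this pulsation while keeping the DC voltage ripple amplitude
    below ripple_amplitude_v. Illustrative rule of thumb only, not the
    thesis's exact guideline.
    """
    omega = 2.0 * math.pi * grid_freq_hz
    return p_load_w / (2.0 * omega * v_dc * ripple_amplitude_v)

# Hypothetical example: 3 kW customer-end load, 750 V DC bus, 2 % ripple allowed.
c_min = dc_link_capacitance(p_load_w=3000.0, v_dc=750.0,
                            ripple_amplitude_v=0.02 * 750.0)
print(f"Minimum DC-link capacitance: {c_min * 1e6:.0f} uF")
```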
Abstract:
More and more innovations currently being commercialized exhibit network effects; in other words, the value of using the product increases as more and more people use the same or compatible products. Although this phenomenon has been the subject of much theoretical debate in economics, marketing researchers have been slow to respond to the growing importance of network effects in new product success. Despite an increase in interest in recent years, there is no comprehensive view of the phenomenon and, therefore, there is currently incomplete understanding of the dimensions it incorporates. Furthermore, there is wide dispersion in operationalization, that is, in the measurement of network effects, and currently available approaches have various shortcomings that limit their applicability, especially in marketing research. Consequently, little is known today about how these products fare in the marketplace and how they should be introduced in order to maximize their chances of success. Hence, the motivation for this study was driven by the need to increase our knowledge and understanding of the nature of network effects as a phenomenon, and of their role in the commercial success of new products. This thesis consists of two parts. The first part comprises a theoretical overview of the relevant literature and presents the conclusions of the entire study. The second part comprises five complementary, empirical research publications. Quantitative research methods and two sets of quantitative data are utilized. The results of the study suggest that there is a need to update both the conceptualization and the operationalization of the phenomenon of network effects. Furthermore, there is a need for an augmented view of customers’ perceived value in the context of network effects, given that the nature of value composition has major implications for the viability of such products in the marketplace. The role of network effects in new product performance is not as straightforward as suggested in the existing theoretical literature. The overwhelming result of this study is that network effects do not directly influence product success, but rather enhance or suppress the influence of product introduction strategies. The major contribution of this study is in conceptualizing the phenomenon of network effects more comprehensively than has been attempted thus far. The study gives an augmented view of the nature of customer value in network markets, which helps in explaining why some products thrive in these markets whereas others never catch on. Second, the study discusses shortcomings in the way prior literature has operationalized network effects, suggesting that these limitations can be overcome in the research design. Third, the study provides some much-needed empirical evidence on how network effects, product introduction strategies, and new product performance are associated. In general terms, this thesis adds to our knowledge of how firms can successfully leverage network effects in product commercialization in order to improve market performance.
Abstract:
The value network has been studied extensively in academic research, but a tool for value network mapping is missing. The objective of this study was to design a tool (process) for value network mapping in cross-sector collaboration. Furthermore, the study addressed a future perspective of collaboration, aiming to map the value network potential. The study also investigated how to realize the full potential of collaboration by creating new value in the collaboration process; these actions are part of the mapping process proposed in the study. The implementation and testing of the mapping process were realized through a case study of cross-sector collaboration in welfare services for the elderly in Eastern Finland. Key representatives in elderly care from the public, private and third sectors were interviewed, and a workshop with experts from every sector was also conducted. The value network mapping process designed in this study consists of specific steps that help managers and experts to understand how to produce a complex value network map and how to enhance it. Furthermore, it makes it easier to understand how new value can be created in the collaboration process. The map can be used to motivate participants to engage responsibly in the collaboration and to commit fully to their interactions. It can also be used as a motivating tool for organizations that intend to engage in a collaboration process. Additionally, the value network map is a starting point for many value network analyses. Furthermore, the enhanced value network map can be used as a performance measurement tool in cross-sector collaboration.
Abstract:
The starting point of this study is to direct more attention to the teacher and to the entrepreneurship education practices taking place in formal schooling, in order to find solutions for promoting entrepreneurship education more effectively. For this objective, the strategy-level aims of entrepreneurship education need to be operationalised into measurable and understandable teacher-level practices. Furthermore, to enable the effective development of entrepreneurship education in basic and upper secondary level education, more knowledge is needed of the state of affairs of entrepreneurship education in teaching. The purpose of the study is to increase the level of understanding of teachers’ entrepreneurship education practices, and through this to develop entrepreneurship education. This study builds on the literature on entrepreneurship education and especially those elements referring to the aims, resources, benefits, methods, and practices of entrepreneurship education. The study comprises five articles highlighting the teachers’ role in entrepreneurship education. The first article considers the concept of entrepreneurship and the teacher’s role in reflecting on his or her approach to entrepreneurship education. The second article provides a detailed analysis of the process of developing a measurement tool to depict the teachers’ activities in entrepreneurship education. The next three articles highlight the teachers’ role in directing entrepreneurship education in basic and upper secondary level education. Furthermore, they analyse the relationship between entrepreneurship education practices and the teachers’ background characteristics. The results of the study suggest a wide range of conclusions and implications. First, despite the many stated aims connected to entrepreneurship education, teachers have not set any aims for themselves. Additionally, aims and results tend to be confused with one another. However, it is possible to develop teachers’ target orientation by supporting their reflection skills, and through measurement and evaluation to increase their understanding of their own practices. Second, by applying a participatory action process it is possible to operationalise teachers’ entrepreneurship education practices. It is central to include the practitioners’ perspective in the development of measures to make sure that the concepts and aims of entrepreneurship education are understood. Third, teachers’ demographic or tenure-related background characteristics do not affect their entrepreneurship education practices, but their training related to entrepreneurship education, participation in school-level or regional planning, and their own capabilities support entrepreneurship education. Fourth, a large number of methods are applied in entrepreneurship education; the most frequently used were different kinds of discussions, which seem to be an easy, low-threshold way for teachers to include entrepreneurship education regularly in their teaching. Field trips to business enterprises or inviting entrepreneurs to present their work in schools are used fairly seldom. Interestingly, visits outside the school are more common than visitors invited to the school. In line with this, most entrepreneurship education practices take place in the classroom. It therefore seems useful to encourage teachers towards more in-depth cooperation with companies (e.g. via joint projects) and towards systematic networking. Finally, there are plenty of resources available for entrepreneurship education, such as ready-made materials, external stakeholders, support organisations, and learning games, but teachers have utilised them only marginally.
Abstract:
Plants and some other organisms, including protists, possess a complex branched respiratory network in their mitochondria. Some pathways of this network are not energy-conserving and allow sites of energy conservation to be bypassed, leading to a decrease in the energy yield of the cell. It is a challenge to understand the regulation of the partitioning of electrons between the various energy-dissipating and -conserving pathways. This review focuses on the oxidase side of the respiratory chain, which presents a cyanide-resistant, energy-dissipating alternative oxidase (AOX) besides the cytochrome pathway. The known structural properties of AOX are described, including transmembrane topology, dimerization, and active sites. Regulation of the alternative oxidase activity is presented in detail because of its complexity. The alternative oxidase activity depends on substrate availability: the total ubiquinone concentration and its redox state in the membrane, and the O2 concentration in the cell. The alternative oxidase activity can be regulated in the long term (gene expression) or in the short term (post-translational modification, allosteric activation). Electron distribution (partitioning) between the alternative and cytochrome pathways during steady-state respiration is a crucial measurement for quantitatively analyzing the effects of the various levels of regulation of the alternative oxidase. Three approaches are described with their specific domains of application and limitations: the kinetic approach, oxygen isotope differential discrimination, and the ADP/O method (thermokinetic approach). Lastly, the role of the alternative oxidase in non-thermogenic tissues is discussed in relation to the energy metabolism balance of the cell (supply of reducing equivalents versus demand for energy and carbon) and to the formation of harmful reactive oxygen species.
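Of the three approaches listed, the oxygen isotope discrimination method is the one usually summarized with a single partitioning relation. As a hedged illustration, using the standard formula from the oxygen-isotope fractionation literature rather than any notation specific to this review, the fraction of electron flux through AOX can be computed from the measured discrimination and the endpoint discriminations of the two pathways:

```python
def aox_partitioning(d_observed, d_cytochrome, d_aox):
    """Fraction of electron flux through the alternative oxidase (AOX).

    Computed from oxygen isotope discrimination values:
      d_observed   - discrimination measured with no inhibitors present
      d_cytochrome - endpoint with AOX inhibited (cytochrome pathway only)
      d_aox        - endpoint with the cytochrome pathway inhibited (AOX only)
    """
    return (d_observed - d_cytochrome) / (d_aox - d_cytochrome)

# Hypothetical values (per mil), not taken from the review:
tau_a = aox_partitioning(d_observed=22.0, d_cytochrome=20.0, d_aox=30.0)
print(f"Fraction of respiration through AOX: {tau_a:.2f}")  # 0.20
```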
Abstract:
In operation since 2008, the ATLAS experiment is the largest of all the experiments at the LHC. The ATLAS-MPX (MPX) detectors installed in ATLAS are based on the Medipix2 silicon pixel detector, developed by the Medipix collaboration at CERN for real-time imaging. The MPX detectors can be used to measure luminosity. They were installed at sixteen different locations in the experimental and technical areas of ATLAS in 2008. The MPX network successfully collected data independently of the ATLAS data acquisition chain from 2008 to 2013. Each MPX detector provides measurements of the integrated luminosity of the LHC. This thesis describes the calibration method for the absolute luminosity measured with the MPX detectors and the performance of the MPX detectors for the 2012 luminosity data. A luminosity calibration constant was determined. The calibration is based on the van der Meer (vdM) technique, which allows the measurement of the size of the two overlapping beams in the vertical and horizontal planes at the ATLAS interaction point (IP1). The determination of the absolute luminosity requires precise knowledge of the beam intensities and of the number of particle bunches. The three calibration scans were analyzed and the results obtained with the MPX detectors were compared with those of the other ATLAS detectors dedicated specifically to luminosity measurement. The luminosity obtained from the vdM scans was compared with the luminosity of proton-proton collisions before and after the vdM scans. The MPX detector network provides reliable information for the luminosity determination of the ATLAS experiment over a wide range (from 5 × 10^29 cm−2 s−1 up to 7 × 10^33 cm−2 s−1).
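As a minimal illustration of the van der Meer principle described above (the standard vdM formula, with assumed example numbers rather than values from the thesis), the instantaneous luminosity follows from the bunch intensities and the convolved beam widths obtained from the horizontal and vertical scans:

```python
import math

def vdm_luminosity(n1, n2, sigma_x_m, sigma_y_m, n_bunches, f_rev_hz=11245.0):
    """Instantaneous luminosity (cm^-2 s^-1) from a van der Meer scan.

    L = f_rev * n_b * N1 * N2 / (2 * pi * Sigma_x * Sigma_y)
    where Sigma_x, Sigma_y are the convolved beam widths measured in the
    horizontal and vertical scans, N1 and N2 the bunch populations, n_b the
    number of colliding bunch pairs, and f_rev the LHC revolution frequency.
    """
    sigma_x_cm = sigma_x_m * 100.0
    sigma_y_cm = sigma_y_m * 100.0
    return f_rev_hz * n_bunches * n1 * n2 / (2.0 * math.pi * sigma_x_cm * sigma_y_cm)

# Illustrative parameters only: 8e10 protons per bunch, ~100 um beam widths.
print(f"{vdm_luminosity(8e10, 8e10, 100e-6, 100e-6, n_bunches=35):.2e} cm^-2 s^-1")
```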
Abstract:
Our goal in this paper is to assess the reliability and validity of egocentered network data using multilevel analysis (Muthén, 1989; Hox, 1993) under the multitrait-multimethod approach. The confirmatory factor analysis model for multitrait-multimethod data (Werts & Linn, 1970; Andrews, 1984) is used for our analyses. In this study we reanalyse part of the data of another study (Kogovšek et al., 2002) carried out on a representative sample of the inhabitants of Ljubljana. The traits used in our article are the name interpreters. We consider egocentered network data as hierarchical; therefore a multilevel analysis is required. We use Muthén's partial maximum likelihood approach, called the pseudobalanced solution (Muthén, 1989, 1990, 1994), which produces estimates close to maximum likelihood for large ego sample sizes (Hox & Maas, 2001). Several analyses are carried out in order to compare this multilevel analysis to classic methods of analysis such as those in Kogovšek et al. (2002), who analysed the data only at the group (ego) level, considering averages over all alters within the ego. We show that some of the results obtained by classic methods are biased and that multilevel analysis provides more detailed information that considerably enriches the interpretation of the reliability and validity of hierarchical data. Within- and between-ego reliabilities and validities and other related quality measures are defined, computed and interpreted.
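As a rough numeric sketch of the quality measures mentioned above, using the conventional true-score MTMM decomposition of an observed indicator into trait, method, and error components (the variance values below are hypothetical, not estimates from the Ljubljana data):

```python
def mtmm_quality(trait_var, method_var, error_var):
    """Reliability and validity coefficients under a true-score MTMM model.

    Observed variance = trait + method + error (assumed uncorrelated).
    reliability^2 = (trait + method) / total   (true-score share of variance)
    validity^2    = trait / (trait + method)   (trait share of the true score)
    """
    total = trait_var + method_var + error_var
    reliability = ((trait_var + method_var) / total) ** 0.5
    validity = (trait_var / (trait_var + method_var)) ** 0.5
    return reliability, validity

# Hypothetical variance components for one trait-method combination:
r, v = mtmm_quality(trait_var=0.60, method_var=0.10, error_var=0.30)
print(f"reliability = {r:.2f}, validity = {v:.2f}")
```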
Abstract:
Compositional data, also called multiplicative ipsative data, are common in survey research instruments in areas such as time use, budget expenditure and social networks. Compositional data are usually expressed as proportions of a total, whose sum can only be 1. Owing to their constrained nature, statistical analysis in general, and estimation of measurement quality with a confirmatory factor analysis model for multitrait-multimethod (MTMM) designs in particular, are challenging tasks. Compositional data are highly non-normal, as they range within the 0-1 interval. One component can only increase if some other(s) decrease, which results in spurious negative correlations among components which cannot be accounted for by the MTMM model parameters. In this article we show how researchers can use the correlated uniqueness model for MTMM designs in order to evaluate measurement quality of compositional indicators. We suggest using the additive log ratio transformation of the data, discuss several approaches to deal with zero components and explain how the interpretation of MTMM designs differs from the application to standard unconstrained data. We show an illustration of the method on data of social network composition expressed in percentages of partner, family, friends and other members, in which we conclude that the face-to-face collection mode is generally superior to the telephone mode, although primacy effects are higher in the face-to-face mode. Compositions of strong ties (such as partner) are measured with higher quality than those of weaker ties (such as other network members).
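As a brief sketch of the additive log ratio (alr) transformation suggested above, with a simple zero-replacement step (the replacement value and the example composition are assumptions for illustration, not the article's data or its preferred zero-handling approach):

```python
import numpy as np

def alr_transform(composition, reference_index=-1, zero_replacement=0.005):
    """Additive log ratio transform of a composition that sums to 1.

    Zero components are replaced by a small value and the composition is
    re-closed before taking log ratios against the reference component.
    The simple substitution used here is only one of several possible
    zero-handling strategies.
    """
    x = np.asarray(composition, dtype=float)
    x = np.where(x == 0.0, zero_replacement, x)
    x = x / x.sum()  # re-close so the parts sum to 1 again
    ref = x[reference_index]
    others = np.delete(x, reference_index % len(x))
    return np.log(others / ref)

# Example: shares of partner, family, friends, others in a personal network.
print(alr_transform([0.25, 0.40, 0.30, 0.05]))
```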
Abstract:
Cloud optical depth is one of the most poorly observed climate variables. The new “cloud mode” capability in the Aerosol Robotic Network (AERONET) will inexpensively yet dramatically increase cloud optical depth observations in both number and accuracy. Cloud mode optical depth retrievals from AERONET were evaluated at the Atmospheric Radiation Measurement program’s Oklahoma site in sky conditions ranging from broken clouds to overcast. For overcast cases, the 1.5 min average AERONET cloud mode optical depths agreed to within 15% of those from a standard ground-based flux method. For broken cloud cases, AERONET retrievals also captured rapid variations detected by the microwave radiometer. For a 3 year climatology derived from all nonprecipitating clouds, AERONET monthly mean cloud optical depths are generally larger than cloud radar retrievals because the current cloud mode observation strategy is biased toward measurements of optically thick clouds. This study has demonstrated a new way to enhance the existing AERONET infrastructure to observe cloud optical properties on a global scale.
Abstract:
We report on the first real-time ionospheric predictions network and its capabilities to ingest a global database and forecast F-layer characteristics and "in situ" electron densities along the track of an orbiting spacecraft. A global network of ionosonde stations reported around-the-clock observations of F-region heights and densities, and an on-line library of models provided forecasting capabilities. Each model was tested against the incoming data; relative accuracies were intercompared to determine the best overall fit to the prevailing conditions; and the best-fit model was used to predict ionospheric conditions on an orbit-to-orbit basis for the 12-hour period following a twice-daily model test and validation procedure. It was found that the best-fit model often provided averaged (i.e., climatologically based) accuracies better than 5% in predicting the heights and critical frequencies of the F-region peaks in the latitudinal domain of the TSS-1R flight path. There was a sharp contrast, however, in model-measurement comparisons involving predictions of actual, unaveraged, along-track densities at the 295 km orbital altitude of TSS-1R. In this case, extrema in the first-principles models varied by as much as an order of magnitude in density predictions, and the best-fit models were found to disagree with the "in situ" observations of Ne by as much as 140%. The discrepancies are interpreted as a manifestation of difficulties in accurately and self-consistently modeling the external controls of solar and magnetospheric inputs and the spatial and temporal variabilities in electric fields, thermospheric winds, plasmaspheric fluxes, and chemistry.
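The twice-daily test-and-select procedure described above amounts to picking, from a library of candidate models, the one with the smallest relative error against the latest ionosonde observations and using it for the next forecast window. A schematic sketch follows; the model interface, error metric, and toy values are assumptions, not the system's actual code:

```python
import numpy as np

def select_best_fit_model(models, observed_fof2, station_inputs):
    """Pick the model whose predictions best match incoming ionosonde data.

    models        - dict of name -> callable(station_inputs) returning
                    predicted foF2 values (hypothetical interface)
    observed_fof2 - array of measured F-region critical frequencies (MHz)
    Returns the best model name and its mean relative error.
    """
    best_name, best_err = None, np.inf
    for name, model in models.items():
        predicted = np.asarray(model(station_inputs), dtype=float)
        rel_err = np.mean(np.abs(predicted - observed_fof2) / observed_fof2)
        if rel_err < best_err:
            best_name, best_err = name, rel_err
    return best_name, best_err

# Hypothetical usage with two toy "models"; the winner would then forecast
# along-track conditions for the following 12-hour period.
obs = np.array([6.1, 7.3, 5.8])
models = {"A": lambda s: [6.0, 7.0, 6.0], "B": lambda s: [5.0, 8.5, 5.0]}
print(select_best_fit_model(models, obs, station_inputs=None))
```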
Abstract:
The urban heat island is a well-known phenomenon that impacts a wide variety of city operations. With the greater availability of cheap meteorological sensors, it is possible to measure the spatial patterns of urban atmospheric characteristics with greater resolution. To develop robust and resilient networks, recognizing that sensors may malfunction, it is important to know when measurement points provide additional information, and also the minimum number of sensors needed to provide spatial information for particular applications. Here we consider the example of temperature data, and the urban heat island, through analysis of a network of sensors in the Tokyo metropolitan area (Extended METROS). The effect of reducing observation points from an existing meteorological measurement network is considered, using random sampling and sampling with clustering. The results indicated that sampling with hierarchical clustering can yield similar temperature patterns with up to a 30% reduction in measurement sites in Tokyo. The methods presented have broader utility in evaluating the robustness and resilience of existing urban temperature networks and in how networks can be enhanced by new mobile and open data sources.
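As a small sketch of the clustering-based network thinning idea (hypothetical temperature time series; the Ward linkage and the retained fraction are illustrative choices, not the study's configuration):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def thin_network(temperatures, keep_fraction=0.7):
    """Reduce a sensor network by hierarchical clustering of temperature series.

    temperatures : array of shape (n_sensors, n_times)
    Sensors with similar records are grouped, and one representative per
    cluster (the member closest to the cluster mean) is retained.
    """
    n_sensors = temperatures.shape[0]
    n_keep = max(1, int(round(keep_fraction * n_sensors)))
    labels = fcluster(linkage(temperatures, method="ward"),
                      t=n_keep, criterion="maxclust")
    kept = []
    for cluster_id in np.unique(labels):
        members = np.where(labels == cluster_id)[0]
        centroid = temperatures[members].mean(axis=0)
        dists = np.linalg.norm(temperatures[members] - centroid, axis=1)
        kept.append(members[np.argmin(dists)])
    return sorted(kept)

# Hypothetical example: 20 sensors with 24 hourly readings each.
rng = np.random.default_rng(0)
series = rng.normal(25.0, 2.0, size=(20, 24))
print(thin_network(series, keep_fraction=0.7))  # indices of retained sensors
```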
Abstract:
The increase in the importance of intangibles in business competitiveness has made investment selection more challenging for investors, who, under high information asymmetry, tend to charge higher premiums to provide capital or simply deny it. Private Equity and Venture Capital (PE/VC) organizations developed contemporaneously with the increase in the relevance of intangible assets in the economy. They form a specialized breed of financial intermediaries that are better prepared to deal with information asymmetry. This paper is the result of ten interviews with PE/VC organizations in Brazil. Its objective is to describe the selection process, criteria and indicators used by these organizations to identify and measure intangible assets, as well as the methods used to value prospective investments. Results show that PE/VC organizations rely on sophisticated methods to assess investment proposals, with specific criteria and indicators for the main classes of intangible assets. However, no value is assigned to these assets individually. The information gathered is used to understand the sources of cash flows and risks, which are then combined by discounted cash flow methods to estimate a firm's value. Given PE/VC organizations' extensive experience with innovative Small and Medium-sized Enterprises (SMEs), we believe that shedding light on how PE/VC organizations deal with intangible assets brings important insights to the intangible assets debate.
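The abstract notes that the information gathered on cash flows and risks is combined through discounted cash flow methods to estimate a firm's value. A minimal generic DCF sketch follows; the cash flows, discount rate, and terminal-growth assumption are illustrative, not figures from the interviewed organizations:

```python
def dcf_value(cash_flows, discount_rate, terminal_growth=0.02):
    """Discounted cash flow valuation with a Gordon-growth terminal value.

    cash_flows      - forecast free cash flows for years 1..N
    discount_rate   - required rate of return reflecting the assessed risks
    terminal_growth - perpetual growth rate applied after year N
    """
    value = sum(cf / (1.0 + discount_rate) ** year
                for year, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1.0 + terminal_growth) / (discount_rate - terminal_growth)
    value += terminal / (1.0 + discount_rate) ** len(cash_flows)
    return value

# Illustrative example: five years of forecast cash flows (in millions), 18% rate.
print(f"Estimated firm value: {dcf_value([1.0, 1.4, 1.9, 2.5, 3.2], 0.18):.1f} million")
```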