80 results for Building extraction
Abstract:
This final project was made for the Broadband department of TeliaSonera. The project gives an overview of how an internet service provider might build an access network to offer triple-play services. It also gives information on what equipment is needed and what is required from the access, aggregation and edge networks. The project starts by describing the triple-play service. Then it moves on to optical fiber cables, the network technology and the network architecture. At the end of the project there is an example of the process and construction of the access network. It gives an overview of the total process and of problems that a network planner might face during the planning phase of the project, and some indication of how one area is built from start to finish. The conclusion of the project presents some points that must be taken into consideration when building an access network. The building of an access network has to be divided into a time span of eight to ten years, where one year is one phase in the project. One phase is divided into three parts: selecting the areas and targets, planning the areas and targets, and documentation. The example area gives an indication of how an area is planned. It is almost impossible to connect all targets at the same time. This means that the service provider has to complete the construction in two or three parts. The area is considered complete when more than 80% of the properties have a fiber connection.
Abstract:
Today’s commercial web sites are under heavy user load and are expected to be operational and available at all times. Distributed system architectures have been developed to provide a scalable and failure-tolerant high-availability platform for these web-based services. The focus of this thesis was to specify and implement a resilient and scalable, locally distributed high-availability system architecture for a web-based service. The theory part concentrates on the fundamental characteristics of distributed systems and presents common scalable high-availability server architectures used in web-based services. The practical part of the thesis explains the implemented new system architecture and includes two test cases that were conducted to measure the system's performance capacity.
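Failure-tolerant architectures of the kind described above typically combine load balancing with health checking, so that requests are routed only to live servers. As a minimal illustrative sketch (not the architecture implemented in the thesis; backend names are invented), a round-robin balancer that skips backends marked as failed might look like this:

```python
import itertools

class RoundRobinBalancer:
    """Round-robin load balancer that skips backends marked unhealthy."""

    def __init__(self, backends):
        self.health = {b: True for b in backends}
        self._cycle = itertools.cycle(backends)

    def mark_down(self, backend):
        self.health[backend] = False

    def mark_up(self, backend):
        self.health[backend] = True

    def next_backend(self):
        # Inspect each backend at most once per request.
        for _ in range(len(self.health)):
            candidate = next(self._cycle)
            if self.health[candidate]:
                return candidate
        raise RuntimeError("no healthy backends available")

lb = RoundRobinBalancer(["web1", "web2", "web3"])
lb.mark_down("web2")          # simulate a detected backend failure
picks = [lb.next_backend() for _ in range(4)]
print(picks)  # ['web1', 'web3', 'web1', 'web3']
```

In a real deployment the `mark_down`/`mark_up` calls would be driven by periodic health probes rather than invoked manually.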
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers’ results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content.
In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep web characterization studies and for constructing directories of deep web resources. Unlike almost all other existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms.
At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Thus, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
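One early step in automating form queries, as described above, is discovering what input fields a search interface exposes before values can be filled in. The following sketch does this for a made-up HTML search form using Python's standard html.parser; the form markup and field names are invented for illustration and are not taken from the thesis:

```python
from html.parser import HTMLParser

class FormFieldExtractor(HTMLParser):
    """Collects the named input controls found inside <form> elements."""

    def __init__(self):
        super().__init__()
        self.fields = []        # (field name, field type) pairs
        self._in_form = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self._in_form = True
        elif self._in_form and tag in ("input", "select", "textarea"):
            if a.get("name"):   # ignore unnamed controls such as submit buttons
                self.fields.append((a["name"], a.get("type", tag)))

    def handle_endtag(self, tag):
        if tag == "form":
            self._in_form = False

html = """
<form action="/search" method="get">
  <label for="q">Title keywords</label>
  <input type="text" name="q" id="q">
  <select name="category"><option>books</option></select>
  <input type="submit" value="Search">
</form>
"""
parser = FormFieldExtractor()
parser.feed(html)
print(parser.fields)  # [('q', 'text'), ('category', 'select')]
```

A real crawler would additionally associate the `<label>` text with each field and handle script-generated forms, which is precisely where systems like the I-Crawler go beyond plain HTML parsing.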
Abstract:
The Kenyan forestry and sawmilling industries have been subject to a changing environment since 1999, when the industrial forest plantations were closed down. This has lowered the raw material supply and has reduced sawmill operations and the viability of sawmill enterprises. The capacity of the 276 registered sawmills is not sufficient to fulfill the demand for sawn timber in Kenya. This is because of technological degradation and the lack of a qualified labor force, caused by the absence of sawmilling education and further training in Kenya. The lack of competent sawmill workers has led to low raw material recovery, underutilization of resources and loss of employment. The objective of the work was to suggest models, methods and approaches for the competence and capacity development of the Kenyan sawmilling industry, its sawmills and their workers. A nationwide field survey, interviews, a questionnaire and a literature review were used for data collection to identify the sawmills’ competence development areas and to suggest models and methods for their capacity building. The sampling frame included 22 sawmills, representing 72.5% of all registered sawmills in Kenya. The results confirmed that the sawmills’ technological level was backward, productivity low, raw material recovery unacceptable and the workers’ professional education low. The future challenge will be how to establish the sawmills’ capacity building and the workers’ competence development. Sawmilling industry development requires various actions through new development models and approaches. Activities should be started for technological development and workers’ competence development. This requires re-starting vocational training in sawmilling and establishing more effective co-operation between the sawmills and their stakeholder groups.
In competence development, the Enterprise Competence Management Model of Nurminen (2007) can be used, whereas the best training model and approach would be practically oriented learning at work, in which short courses, technical assistance and extension services are the key functions.
Abstract:
Scientific studies focusing specifically on references do not seem to exist. However, the utilization of references is an important practice for many companies involved in industrial marketing. The purpose of the study is to increase understanding of the utilization of references in international industrial marketing, in order to contribute to the development of a theory of reference behavior. Specifically, the study explores the modes of reference usage in industry, the factors affecting a supplier's reference behavior, and the question of how references are actually utilized. Due to the explorative nature of the study, a research design was followed in which theory and empirical studies alternated. An Exploratory Framework was developed to guide a pilot case study that resulted in Framework 1. The results of the pilot study guided an expanded literature review that was used to develop first a Structural Framework and a Process Framework, which were combined in Framework 2. Then the second empirical phase of the case study was conducted in the same (pilot) case company. In this phase, Decision Systems Analysis (DSA) was used as the analysis method. The DSA procedure consists of three interviewing waves: initial interviews, reinterviews, and validating interviews. Four reference decision processes were identified, described and analyzed in the form of flowchart descriptions. The flowchart descriptions were used to explore new constructs and to develop new propositions to refine Framework 2 further. The quality of the study was ascertained by many actions in both empirical parts of the study. The construct validity of the study was ascertained by using multiple sources of evidence and by asking the key informant to review the pilot case report. The DSA method itself includes procedures assuring validity. Because a single case study was chosen, external validity was not pursued.
High reliability was pursued through detailed documentation and thorough reporting of evidence. It was concluded that the core of the concept of reference is a customer relationship, regardless of the concrete forms a reference might take in its utilization. Depending on various contingencies, references might have various tasks within four roles: increasing 1) the efficiency of sales and sales management, 2) the efficiency of the business, 3) the effectiveness of marketing activities, and 4) effectiveness in establishing, maintaining and enhancing customer relationships. Thus, references have not only external but also internal tasks. A supplier's reference behavior might be affected by many hierarchical conditions. Additionally, the empirical study showed that the supplier can utilize its references as a continuous, all-pervasive decision-making process through various practices. The process includes both individual and unstructured decision-making subprocesses. The proposed concept of reference can be used to guide a reference policy, recommendable for companies for which the utilization of references is important. The significance of the study is threefold: proposing the concept of reference, developing a framework of a supplier's reference behavior and its short-term process of utilizing references, and conceptually structuring an unstructured phenomenon that is important in industrial marketing into four roles.
Abstract:
Globalization has increased the demand for transport. Whilst transport volumes increase, the importance of ecological values has sharpened: the carbon footprint has become a measure known worldwide. The European Union, together with other communities, emphasizes environmental friendliness, and the same trend has extended to transport. Railway transport is seen as a potential substitute for road transport, as it decreases congestion and lowers emission levels. The railway freight market in the European Union was liberalized in 2007, which enabled new operators to enter the market. This research had two main objectives. Firstly, it examined the main market entry strategies utilized, and the barriers to entry confronted, by the operators who entered the market after the liberalization. Secondly, the aim was to find ways in which the governmental organization could enhance its service towards potential railway freight operators. The research is a qualitative case study, utilizing a descriptive-analytical research method with a normative shade. Empirical data was gathered by interviewing Swedish and Polish railway freight operators using a semi-structured theme interview. This research provides novel information by using first-hand data; the topic has previously been researched using second-hand data and literature analyses. Based on this research, rolling stock acquisition, the needed investments and bureaucracy constitute the main barriers to entry. The research results show that the most utilized market entry strategies are start-up and vertical integration. The governmental organization could enhance the market entry process by organizing courses, paying extra attention to flexibility and internal know-how, and educating the staff.
Abstract:
The amphiphilic nature of metal extractants causes the formation of micelles and other microscopic aggregates when in contact with water and an organic diluent. These phenomena and their effects on metal extraction were studied using carboxylic acid (Versatic 10) and organophosphorus acid (Cyanex 272) based extractants. Special emphasis was laid on the study of phase behaviour in the pre-neutralisation stage, when the extractant is transformed into its sodium or ammonium salt form. The pre-neutralised extractants were used to extract nickel and to separate cobalt and nickel. Phase diagrams corresponding to the pre-neutralisation stage in a metal extraction process were determined. The maximal solubilisation of the components in the system water(NH3)/extractant/isooctane takes place when the molar ratio between the ammonia salt form and the free form of the extractant is 0.5 for the carboxylic acid and 1 for the organophosphorus acid extractant. These values correspond to the complex stoichiometries of NH4A•HA and NH4A, respectively. When such a solution is contacted with water, a microemulsion is formed. If the aqueous phase also contains metal ions (e.g. Ni²+), complexation takes place at the microscopic interface of the micellar aggregates. Experimental evidence was obtained showing that the initial stage of nickel extraction with pre-neutralised Versatic 10 is a fast pseudohomogeneous reaction. About 90% of the metal was extracted in the first 15 s after the initial contact. For nickel extraction with pre-neutralised Versatic 10, it was found that the highest metal loading and the lowest residual ammonia and water contents in the organic phase are achieved when the feeds are balanced so that the stoichiometry is 2NH4+(org) = Ni²+(aq).
In the case of Co/Ni separation using pre-neutralised Cyanex 272, the highest separation is achieved when the Co/extractant molar ratio in the feeds is 1:4 and, at the same time, the optimal degree of neutralisation of the Cyanex 272 is about 50%. The adsorption of the extractants on solid surfaces may cause the accumulation of fine solid particles at the interface between the aqueous and organic phases in metal extraction processes. Copper extraction processes are known to suffer from this problem. Experiments were carried out using model silica and mica particles. It was found that high copper loading, the aromaticity of the diluent, modification agents and the presence of an aqueous phase decrease the adsorption of the hydroxyoxime on silica surfaces.
Abstract:
This work proposes a method of visualizing the trend of research in the field of ceramic membranes from 1999 to 2006. The presented approach involves identifying problems encountered during research in the field of ceramic membranes. Patents from the US patent database and articles from ScienceDirect (by Elsevier) were analyzed for this work. The identification of problems was achieved with the software Knowledgist, which focuses on the semantic nature of a sentence to generate a series of subject-action-object structures. The identified problems are classified into major research issues. This classification was used for the visualization of the intensity of research. The image produced gives the relation between the number of patents, time and the major research issues. The most cited papers, which strongly influence the research on the previously identified major issues in the given field, were also identified. The relations between these papers are presented using the metaphor of a social network. The final result of this work is two figures: a diagram showing the change in the studied problems over a specified period of time, and a figure showing the relations between the major papers and the groups of problems.
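The classification step described above — grouping extracted problem statements into major research issues and counting them over time — can be sketched with simple keyword matching. The phrases, years and issue labels below are invented examples (the actual study used Knowledgist's subject-action-object output, not this heuristic):

```python
from collections import Counter

# Invented problem phrases of the kind a semantic extractor might return.
problems = [
    (1999, "membrane fouling during filtration"),
    (2001, "thermal cracking of the ceramic support"),
    (2001, "fouling of the membrane surface"),
    (2004, "low permeate flux"),
    (2006, "cracking under thermal cycling"),
]

# Keyword rules mapping a phrase to an assumed major research issue.
ISSUE_KEYWORDS = {
    "fouling": "fouling",
    "cracking": "mechanical stability",
    "flux": "permeability",
}

def classify(phrase):
    for keyword, issue in ISSUE_KEYWORDS.items():
        if keyword in phrase:
            return issue
    return "other"

# (year, issue) counts: the data behind an intensity-over-time diagram.
intensity = Counter((year, classify(text)) for year, text in problems)
print(intensity.most_common())
```

The resulting (year, issue) counts are exactly the kind of table that a research-intensity diagram of the sort described in the abstract would be plotted from.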
Abstract:
Liquid-liquid extraction is a mass transfer process for recovering desired components from a liquid stream by contacting it with an immiscible liquid solvent. The literature part of this thesis deals with the theory of liquid-liquid extraction and the main steps of extraction process design. The experimental part investigates the extraction of organic acids from aqueous solution. The aim was to find the optimal solvent for recovering organic acids from aqueous solutions. The other objective was to test the selected solvent at pilot scale in a packed column and to compare the effectiveness of structured and random packing, the effect of dispersed phase selection, and the effect of the wettability properties of the packing material. Experiments showed that the selected solvent works well with dilute organic acid solutions. The random packing proved to be more efficient than the structured packing due to a higher hold-up of the dispersed phase. Dispersing the phase that is present in the larger volume proved to be more efficient. With the random packing, the material that was wetted by the dispersed phase was more efficient due to a higher hold-up of the dispersed phase. According to the literature, the behavior is usually the opposite.
Abstract:
The increasing power demand and emerging applications drive the design of electrical power converters toward modularization. Despite the wide use of modularized power stage structures, the control schemes used are often traditional, in other words, centralized. The flexibility and re-usability of these controllers are typically poor. With a dedicated distributed control scheme, the flexibility and re-usability of the system parts, the building blocks, can be increased. Only a few distributed control schemes have been introduced for this purpose, but their breakthrough has not yet taken place. A demand for the further development of flexible control schemes for building-block-based applications clearly exists. The control topology, communication, synchronization, and functionality allocation aspects of building-block-based converters are studied in this doctoral thesis. A distributed control scheme that can easily be adapted to building-block-based power converter designs is developed. The example applications are a parallel and a series connection of building blocks. The building block used in the implementations of both applications is a commercial off-the-shelf two-level three-phase frequency converter with a custom-designed controller card. The major challenge with the parallel connection of power stages is the synchronization of the building blocks. The effect of synchronization accuracy on the system performance is studied. The functionality allocation and control scheme design are challenging in series-connected multilevel converters, mainly because of the large number of modules. Various multilevel modulation schemes are analyzed with respect to their implementation, and this information is used to develop a flexible control scheme for modular multilevel inverters.
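One concrete detail underlying series-connected multilevel modulation is the textbook phase-shifted-carrier rule: the PWM carriers of N modules are offset from one another by 360/N degrees, which is one reason carrier synchronization across building blocks matters. A small sketch of this general rule (not a result of this thesis):

```python
# Carrier phase shifts for phase-shifted PWM of n series-connected modules.
# Offsetting the carriers by 360/n degrees makes the effective switching
# frequency seen by the load n times the per-module carrier frequency.
def carrier_phase_shifts(n_modules):
    step = 360.0 / n_modules
    return [round(i * step, 2) for i in range(n_modules)]

print(carrier_phase_shifts(4))  # [0.0, 90.0, 180.0, 270.0]
```

Any synchronization error between controllers shows up directly as deviation from these nominal offsets, which is why the thesis studies synchronization accuracy.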
Abstract:
Lutheran mission work in Thailand did not begin until 1976, when Norwegian Lutheran missionaries arrived in the country. A couple of years later, the Finnish Missionary Society (Finska Missionssällskapet) began cooperating with the Norwegians, and in time three Asian Lutheran churches also joined the mission cooperation in Thailand. From the very beginning, the goal of the mission work was to found an independent Lutheran church in Thailand. This took place in 1994, 18 years after the work in Thailand had begun. The thesis examines which working methods and forms of activity were used, and how the founding of an independent national Lutheran church was prepared and realized. The thesis also examines the Lutheran mission in Thailand in relation to contemporary international mission thinking and currents within the Christian world mission. Finally, the Lutheran mission in Thailand is placed in its Thai cultural and religious context.
Knowledge Sharing between Generations in an Organisation - Retention of the Old or Building the New?
Abstract:
The study explores knowledge transfer between retiring employees and their successors in expert work. My aim is to ascertain whether this organisational knowledge transfer between generations involves knowledge development or the building of new knowledge; in other words, is the transfer of knowledge from experienced, retiring employees to their successors merely the retention of existing organisational knowledge by distributing it from one individual to another, or does this transfer lead to the building of new and meaningful organisational knowledge? In this study, I call knowledge transfer between generations, together with the possibly related knowledge building, knowledge sharing between generations. The study examines the organisation and knowledge management from a knowledge-based and constructionist view. From this standpoint, I see knowledge transfer as an interactive process, and the exploration is based on how the people involved in this process understand and experience the phenomenon studied. The research method is organisational ethnography. I conducted the analysis of the data using thematic analysis and the articulation method, which has not been used before in organisational knowledge studies. The primary empirical data consist of theme interviews with twelve employees involved in knowledge transfer in the organisation being studied, and five follow-up theme interviews. Six of the interviewees are expert-duty employees due to retire shortly, and six are their successors. All those participating in the follow-up interviews are successors of those soon to retire from their expert responsibilities. The organisation in the study is a medium-sized Finnish firm that designs and manufactures electrical equipment and systems for the global market. The results of the study show that expert work-related knowledge transfer between generations can mean knowledge building that produces new, meaningful knowledge for the organisation.
This knowledge is distributed in the organisation to all those who find it useful in increasing the efficiency and competitiveness of the whole organisation. The transfer and building of knowledge together create an act of knowledge sharing between generations, in which the building of knowledge presupposes transfer. Knowledge sharing proceeds between the expert and the novice through eight phases. During the phases of knowledge transfer, the expert guides the novice in absorbing the knowledge to be transferred. With the expert’s help, the novice gradually comes to understand the knowledge, and in the end he or she is capable of using it in his or her work. During the phases of knowledge building, the expert helps the novice to develop the transferred knowledge further, so that it becomes new, useful knowledge for the organisation. After that, the novice puts the built knowledge to use in his or her work. Based on the results of the study, knowledge sharing between generations takes place in interaction and ends when the knowledge is put to use. The results I obtained in the interviews by the articulation method show that knowledge sharing between generations is shaped by the novices’ conceptions of their own work goals, knowledge needs and duties. These are based not only on the official definition of the work, but also on how the novices perceive their work and how they prioritise the given objectives and responsibilities. The study shows that the novices see their work primarily as either maintenance or development. Those primarily involved in maintenance duties do not necessarily need the knowledge defined as transferred between generations. Therefore, they do not necessarily transfer knowledge with their assigned experts, even though this can happen in favourable circumstances. They do not build knowledge, because their view of their work goals and duties does not require the building of new knowledge.
Those primarily involved in development duties, however, do need the knowledge available from their assigned experts. Therefore, regardless of circumstances, they transfer knowledge with their assigned experts and also build knowledge, because their work goals and duties create a basis for building new knowledge. The literature on knowledge transfer between generations has focused on describing either the knowledge being transferred or the means by which it is transferred. Based on the results of this study, however, knowledge sharing between generations, that is, transfer and building, is determined by how the novice considers his or her own knowledge needs and work practices. This is why studies on knowledge sharing between generations and its implementation should be based not only on the knowledge content and how it is shared, but also on the context of the work in which the novice interprets and shares knowledge. The existing literature has not considered the possibility that knowledge transfer between generations may mean building knowledge. The results of this study, however, show that this is possible. In knowledge building, the expert’s existing organisational knowledge is combined with the new knowledge that the novice brings to the organisation. In their interaction, this combination of the expert’s “old” and the novice’s “new” knowledge becomes new, meaningful organisational knowledge. Previous studies show that knowledge development between the members of an organisation is a prerequisite for organisational renewal, which in turn is essential for improved competitiveness. Against this background, knowledge building enables organisational renewal and thus enhances competitiveness. Hence, when knowledge transfer between generations is followed by knowledge building, the organisation kills two birds with one stone. In knowledge transfer, the organisation retains its existing knowledge and thus maintains its competitiveness.
In knowledge building, the organisation develops new knowledge and thus improves its competitiveness.
Abstract:
Object-oriented programming is a widely adopted paradigm for desktop software development. This paradigm partitions software into separate entities, objects, which consist of data and related procedures used to modify and inspect it. The paradigm has evolved during the last few decades to emphasize decoupling between object implementations, via means such as explicit interface inheritance and event-based implicit invocation. Inter-process communication (IPC) technologies allow applications to interact with each other. This enables making software distributed across multiple processes, resulting in a modular architecture with benefits in resource sharing, robustness, code reuse and security. The support for object-oriented programming concepts varies between IPC systems. This thesis focuses on the D-Bus system, which has recently gained many users but is still scantily researched. D-Bus has support for asynchronous remote procedure calls with return values and a content-based publish/subscribe event delivery mechanism. In this thesis, several patterns for method invocation in D-Bus and similar systems are compared. The patterns that simulate synchronous local calls are shown to be dangerous. We then present a state-caching proxy construct, which avoids the complexity of properly asynchronous calls for object inspection. The proxy and certain supplementary constructs are presented conceptually as generic object-oriented design patterns. The effect of these patterns on the non-functional qualities of software, such as complexity, performance and power consumption, is reasoned about based on the properties of the D-Bus system. The use of the patterns reduces complexity while maintaining the other qualities at a good level. Finally, we present the currently existing means of specifying D-Bus object interfaces for the purposes of code and documentation generation.
The interface description language used by the Telepathy modular IM/VoIP framework is found to be a useful extension of the basic D-Bus introspection format.
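The state-caching proxy idea mentioned above can be sketched as follows: property reads are served from a local cache that asynchronous change notifications keep up to date, so object inspection never blocks on a remote round trip. The class and method names below are illustrative stand-ins, not the actual D-Bus API:

```python
class StateCachingProxy:
    """Serves property reads from a local cache kept current by
    change notifications, avoiding blocking remote calls."""

    def __init__(self, initial_properties):
        self._cache = dict(initial_properties)
        self.remote_calls = 0   # would count blocking round trips

    def get(self, name):
        # Answered locally: no remote call is issued.
        return self._cache[name]

    def on_properties_changed(self, changed):
        # Invoked asynchronously when the remote object signals a change
        # (in D-Bus terms, something like a PropertiesChanged signal).
        self._cache.update(changed)

proxy = StateCachingProxy({"Volume": 50, "Muted": False})
print(proxy.get("Volume"))              # 50, served from the cache
proxy.on_properties_changed({"Volume": 75})
print(proxy.get("Volume"))              # 75, still no remote round trip
```

The design trade-off is staleness between notifications in exchange for inspection calls that are as cheap and safe as local reads, which is the complexity reduction the thesis reasons about.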