Governor Forrest Anderson’s Leadership & Political Acumen -- Alec Hansen “In the Crucible of Change”
Abstract:
Montana Governor Forrest Anderson was perhaps the most experienced and qualified person ever elected Governor of Montana. Having previously served as a county attorney, a member of the legislature, a Supreme Court Justice, and twelve years as Attorney General, Anderson roared to a large victory in 1968 over the incumbent GOP Governor Tim Babcock. Though the progressive change period in Montana began a few years earlier, Anderson's 1968 win catapulted progressive policy-making into the mainstream of Montana political and governmental affairs. He used his unique skills and leadership to craftily architect the reorganization of the executive branch, which had been kept weak since statehood so that the people's government would not be able to challenge the corporations that so dominated Montana. Anderson, whose "Pay More, What For?" campaign slogan strongly separated him from Tim Babcock and the GOP on the sales tax issue, not only beat back the regressive sales tax in the 1968 election but oversaw its demise at the polls in 1971, shaping politics in Montana for decades to come. Anderson was also a strong proponent of a new Montana Constitution and contributed strategically to its calling and passage. Anderson served only one term as Governor for health reasons, but made those four years a launch pad for progressive politics and government in Montana. In this film, Alec Hansen, Special Assistant to Governor Anderson, provides an insider's perspective as he reflects on the unique way in which Governor Anderson got things done at this critical period "In the Crucible of Change." Alec Hansen is best known in Montana political and governmental circles as the long-time chief of the Montana League of Cities and Towns, but he cut his teeth in public service with Governor Forrest Anderson. Alec was born in Butte in 1941 and attended local schools, graduating from Butte High in 1959. After several years working as a miner and warehouseman for the Anaconda Company in Butte, he attended UM and graduated in History and Political Science in 1966. He joined the U.S. Navy and served with amphibious forces in Vietnam. After discharge from the Navy in 1968, he worked as a news and sports reporter for The Montana Standard in Butte until September 1969, when he joined Governor Anderson as a Special Assistant focused on press, communications and speech-writing. Alec has noted that his drafts were turned into pure Forrest Anderson remarks by the man himself. He learned at the knee of "The Fox" for the rest of Anderson's term and continued with Governor Tom Judge for two years before returning to Butte to work for the Anaconda Company as Director of Communications for Montana operations. In February 1978, after Anaconda was acquired by the Atlantic Richfield Company, Alec went to work for U.S. Senator Paul Hatfield in Washington, D.C., leaving after Hatfield's primary election loss in June 1978. He went back to work for Gov. Judge, remaining until the end of 1980. In 1981 Alec worked as a contract lobbyist and as a news and sports reporter for the Associated Press in Helena. In 1982, the Montana League of Cities and Towns hired him as Executive Director, a position he held until his retirement in 2014. Alec and his wife, Colleen, are the parents of two grown children and have one grandson.
Abstract:
The dramatic period of progressive change in Montana documented in the "In the Crucible of Change" series really exploded with the election of Governors Forrest Anderson and Tom Judge. Anderson's single term saw the dispatching of the sales tax as an issue for a long period, the reorganization of the executive branch of state government, and the revision of Montana's Constitution. As a former legislator, county attorney, Supreme Court justice, and Attorney General, Anderson brought unmatched experience to the governorship when elected. Tom Judge, although much younger (elected MT's youngest governor at age 38 immediately following Anderson), also brought serious experience to the governorship: six years as a MT State Representative, two years as a MT State Senator, four years as Lieutenant Governor, and significant business experience. The campaign and election of John F. Kennedy in 1960 spurred other young Americans to service, including Tom Judge. First elected in 1960, he rose rapidly through MT's political-governmental hierarchy until he took over the governorship in time to implement many of the changes started in Governor Anderson's term. But as a strong progressive leader in his own right, Governor Judge sponsored and implemented significant advancements of his own for Montana. Those accomplishments, however, are the subject of other films in this series. This film deals with Tom Judge's early years: his rise to the governorship from when he returned home after college at Notre Dame and newspaper experience in Kentucky to his actual election in November 1972. That story is discussed in this episode by three major players in the effort, all directly involved in Tom Judge's early years and path to the governorship: Sidney Armstrong, Larry Pettit and Kent Kleinkopf. Their recollections of the early Tom Judge and the period of his advancement to the governorship provide an insider's perspective on the growth of this significant leader during the important period of progressive change documented "In the Crucible of Change." Sidney Armstrong, President of Sidney Armstrong Consulting, serves on the board and as the Executive Director of the Greater Montana Foundation. Formerly Executive Director of the Montana Community Foundation (MCF), she has served on national committees and participated in national foundation initiatives. While at MCF, she worked extensively with MT Governors Racicot and Martz on the state charitable endowment tax credit and other endowed philanthropy issues. A member of MT Governor Thomas L. Judge's staff in the 1970s, she was also part of Governor Brian Schweitzer's 2004 Transition Team, continuing to serve as a volunteer advisor during his term. In the 1980s, Sidney also worked for the MT State AFL-CIO and the MT Democratic Party, as well as working two sessions with the MT Senate as Assistant Secretary of the Senate and aide to the President. A Helena native, and great-granddaughter of pioneer Montanans, Sidney has served on numerous nonprofit boards and is currently a board member for the Montana History Foundation. Recently she served on the board of the Holter Museum of Art and was a Governor's appointee to the Humanities Montana board. She is a graduate of the International School of Geneva, Switzerland, and the University of Montana. Armstrong's maternal great-grandparents, Irish immigrants Thomas and Maria Cahill Cooney, came to Virginia City, MT in a covered wagon in 1865, looking for gold.
Eventually, they settled on the banks of the Missouri River outside Helena as ranchers. She also has roots in Butte, MT, where her journalist father's parents, both of whom were newspaper people, lived. Her father, Richard K. O'Malley, is also the author of a well-known book about Butte, Mile High, Mile Deep, recently re-published by Russell Chatham. She is the mother of four and the grandmother of eight. Dr. Lawrence K. Pettit (Larry Pettit) (b. 5/2/1937) has had a dual career in politics and higher education. In addition to being Montana's first Commissioner of Higher Education (the subject of another film in this series), Pettit, of Lewistown, served as legislative assistant to U.S. Senators James E. Murray and Lee Metcalf, and as campaign manager, head of the transition team, and assistant to Montana Governor Thomas L. Judge; he taught political science at The Pennsylvania State University (main campus), was chair of political science at Montana State University, Deputy Commissioner for Academic Programs at the Texas Higher Education Coordinating Board, Chancellor of the University System of South Texas (since merged with Texas A&M University), President of Southern Illinois University, and President of Indiana University of Pennsylvania, from where he retired in 2003. He has served as chair of the Commission on Leadership for the American Council on Education, as president of the National Association of (University) System Heads, and on many national and state boards and commissions in higher education. Pettit is the author of "If You Live by the Sword: Politics in the Making and Unmaking of a University President." More about Pettit is found at http://www.lawrencekpettit.com… Kent Kleinkopf of Missoula is co-founder of a firm with a national scope of business that specializes in litigation consultation, expert vocational testimony, and employee assistance programs. His partner (and wife of 45 years), Kathy, is an expert witness in the 27-year-old business. Kent received a BA in History/Education from the University of Idaho and an MA in Economics from the University of Utah. The Kleinkopfs moved to Helena, MT in 1971, where he was Assistant to the Commissioner of State Lands (later Governor) Ted Schwinden. In early 1972 Kent volunteered full time in Lt. Governor Tom Judge's campaign for Governor, driving the Lt. Governor extensively throughout Montana. After Judge was elected Governor, Kent briefly joined the staff of Governor Forrest Anderson, then in 1973 transitioned to Judge's Governor's Office staff, where he became Montana's first "Citizens' Advocate." In that capacity he fielded requests from citizens for assistance and information regarding State Agencies. While on the Governor's staff, Kent continued as a travel aide with the Governor both in Montana and nationally. In 1977 Kent was appointed Director of the MT Department of Business Regulation. That role included responsibility as Superintendent of Banking and Chairman of the State Banking Board, where Kent presided over the chartering of many banks, savings and loans, and credit unions. In 1981 the Kleinkopfs moved to Missoula and went into the business they run today. Kent was appointed by Governor Brian Schweitzer to the Board of the Montana Historical Society in 2006, was reappointed, and continues to serve. Kathy and Kent have a daughter and son-in-law in Missoula.
Abstract:
The article focuses on the current situation of Spanish case law on ISP liability. It starts by presenting the more salient peculiarities of the Spanish transposition of the safe harbours laid down in the E-Commerce Directive. These peculiarities relate to the knowledge requirement of the hosting safe harbour, and to the safe harbour for information location tools. The article then provides an overview of the cases decided so far with regard to each of the safe harbours. Very few cases have dealt with the mere conduit and the caching safe harbours, though the latter was discussed in an interesting case involving Google's cache. Most cases relate to the hosting and linking safe harbours. With regard to hosting, the article focuses particularly on the two judgments handed down by the Supreme Court that adopt an open interpretation of actual knowledge, an issue on which courts had so far been split. Cases involving the linking safe harbour have mainly dealt with websites offering P2P download links. Accordingly, the article explores the legal actions brought against these sites, which for the moment have been unsuccessful. The new legislative initiative to fight digital piracy, the Sustainable Economy Bill, is also analysed. After the conclusion, the article provides an Annex listing the cases that have dealt with ISP liability in Spain since the safe-harbours scheme was transposed into Spanish law.
Abstract:
This study of ambulance workers in the City of Houston's emergency medical services examined the factors related to shiftwork tolerance and intolerance. The EMS personnel work a 24-hour shift with rotating days of the week. Workers are assigned to the A, B, C, or D shift, each of which rotates 24 hours on, 24 hours off, 24 hours on, and 4 days off. One hundred seventy-six male EMTs, paramedics and chauffeurs from stations of varying levels of activity were surveyed. The sample group ranged in age from 20 to 45. The average tenure on the job was 8.2 years. Over 68% of the workers held a second job, the majority of whom worked over 20 hours a week at the second position. The survey instrument was a 20-page questionnaire modeled after the Folkard Standardized Shiftwork Index. In addition to demographic data, the survey tool provided measurements of general job satisfaction, sleep quality, general health complaints, morningness/eveningness, cognitive and somatic anxiety, depression, and circadian type. The survey questionnaire also included an EMS-specific stress scale. A conceptual model of shiftwork tolerance was presented to identify the key factors examined in the study. An extensive list of 265 variables was reduced to 36 key variables relating to: (1) shift schedule and demographic/lifestyle factors, (2) individual differences in traits and characteristics, and (3) tolerance/intolerance effects. Using the general job satisfaction scale as the key measurement of shift tolerance/intolerance, a significant relationship was shown between this dependent variable and stress, number of years working a 24-hour shift, sleep quality, languidness/vigorousness, the usual amount of sleep received during the shift, general health complaints, and flexibility/rigidity (R² = .5073). The sample consisted of a majority of morningness types or extreme-morningness types, few evening types, and no extreme-evening types, duplicating the findings of Motohashi's previous study of ambulance workers. The level of activity by station had no significant effect on any of the dependent variables examined. However, the shift worked had a relationship with sleep quality, despite the fact that all shifts work the same hours and participate in the same rotation schedule.
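As a rough illustration of the kind of multiple-regression analysis summarized above, the sketch below regresses job satisfaction on the seven significant predictors using statsmodels. The data file and column names are hypothetical; the study's actual survey data are not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ems_shiftwork_survey.csv")  # hypothetical file

model = smf.ols(
    "job_satisfaction ~ stress + years_on_24h_shift + sleep_quality"
    " + languid_vigorous + sleep_during_shift + health_complaints + flex_rigid",
    data=df,
)
result = model.fit()
print(result.rsquared)   # the study reports R-squared = .5073 for its predictors
print(result.summary())
```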
Abstract:
Digitization, sophisticated fiber-optic networks and the resultant convergence of the media, communications and information technology industries have completely transformed the communications ecosystem in the last couple of decades. New contingent business and social models were created that have been mirrored in the amended communications regimes. Yet, despite an overhaul of the communications regulation paradigm, the status of and the rules on universal service have remained surprisingly intact, both during and after the liberalization exercise. The present paper looks into this paradox and examines the sustainability of the existing concept of universal service. It suggests that there is a need for a novel concept of universal service in the digital networked communications environment, whose objectives go beyond the conventional internalizing and redistributional rationales and concentrate on communication and information networks as a public good, where not only access to infrastructure but also access to content may be essential.
Abstract:
In a representative cross-sectional study over 12 months in 2008/2009 in four abattoirs in Switzerland, lung and pleura lesions, as well as lesions of slaughter carcasses and organs, were studied in 34,706 pigs for the frequency and type of macroscopic lesions. Of the 24,276 examined pigs, 91.2% of the lungs, 94.4% of the hearts and 95.5% of the livers showed no macroscopically visible lesions. Pigs produced for a label program had significantly less bronchopneumonia and pneumonia residuals, pleuritis, and liver lesions due to echinococcosis. Pigs supervised by the Swiss Pig Health Service (SGD) showed significantly less bronchopneumonia and pneumonia residuals, diffuse pleuritis, pleuritis/pericarditis, and milk spots compared to pigs from non-SGD-supervised farms. Thanks to the national eradication program for enzootic pneumonia (EP) and actinobacillosis, the health status of lungs has improved considerably and the prevalence of pleurisy has decreased markedly. The results of this study indicate good herd health in Swiss pig production.
Abstract:
Background. Each year thousands of people participate in mass health screenings for diabetes and hypertension, but little is known about whether those who receive higher-than-normal screening results obtain the recommended follow-up medical care, or what barriers they perceive to doing so. Methods. Study participants were recruited from attendees at three health fairs in low-income neighborhoods in Houston, Texas. Potential participants had higher-than-normal blood pressure (>140/90 mmHg) or blood glucose readings (100 mg/dL fasting or 140 mg/dL random). Study participants were called at one, two, and three months and asked if they had obtained follow-up medical care; those who had not yet obtained follow-up care were asked to identify barriers. Using a modified Aday-Andersen model of health service access, the independent variables were individual and community characteristics and self-perceived need. The dependent variable was obtaining follow-up care, with barriers to care a secondary outcome. Results. Eighty-two study participants completed the initial questionnaire and 59 participants completed the study protocol. Forty-eight participants (59% under an intent-to-treat analysis, 81% of those completing the study protocol) obtained follow-up care. Those who completed the initial questionnaire and who reported a regular source of care were significantly more likely to obtain follow-up care. For those who completed the study protocol, the relationship between having a regular source of care and obtaining follow-up care approached but did not reach significance. For those who completed the initial questionnaire, self-described health status, examined as a binary variable (good/very good/excellent versus poor/fair/not sure), was associated with obtaining follow-up care for those who rated their health as poor, fair, or not sure. While the result for the group who completed the study protocol did not reach statistical significance, the same relationship between self-described health status of poor, fair, or not sure and obtaining follow-up care was present. Participants who completed the study protocol and described their blood pressure as OK or a little high were statistically more likely to get follow-up care than those who described it as high or very high. All those on oral medications for hypertension (12/12) and diabetes (4/4) who were told to obtain follow-up care did so; however, the small sample size allows this correlation to be of statistical significance only for those treating hypertension. The variables significantly associated with obtaining follow-up care were having a regular source of care, self-described health status of poor, fair, or not sure, self-described blood pressure of OK or a little high, and taking medication for blood pressure. At the follow-up telephone calls, 34 participants identified barriers to care; cost was a significant barrier reported by 16 participants, and 10 reported that they did not have time because they were working long hours after Hurricane Ike. The study included the offer of access assistance: information about nearby safety-net providers, a visit to or information from the Health Information Center at their Neighborhood Center location, or information from Project Safety Net (a searchable web site for safety-net providers). Access assistance was offered at the health fairs and then again at follow-up telephone calls to those who had not yet obtained follow-up care. Of the 48 participants who reported obtaining follow-up care, 26 said they had made use of the access assistance to do so. The use of access assistance was associated with being Hispanic, not having health insurance or a regular source of care, and speaking Spanish. It was also associated with being worried about blood glucose. Conclusion. Access assistance, as a community enabling characteristic, may be useful in aiding low-income people in obtaining medical care.
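For illustration, the association tests described above could be framed as a logistic regression of the binary follow-up outcome on the enabling characteristics. The sketch below is a hypothetical rendering; the variable names and data file are invented, and the study itself reports bivariate associations rather than this exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("screening_followup.csv")  # hypothetical file

model = smf.logit(
    "followup_care ~ regular_source_of_care + poor_fair_health"
    " + bp_described_ok + on_bp_medication",
    data=df,
)
result = model.fit()
print(result.summary())  # small n (59 to 82) limits power, as the study notes
```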
Abstract:
Patients who started HAART (Highly Active Anti-Retroviral Treatment) under the previous, aggressive DHHS guidelines (1997) underwent life-long continuous HAART that was associated with many short-term as well as long-term complications. Many interventions attempted to reduce those complications, including intermittent treatment, also called pulse therapy. Many studies have examined the determinants of the rate of fall in CD4 count after interruption, as these data would help guide treatment interruptions. The data set used here was part of a cohort study taking place at the Johns Hopkins AIDS service since January 1984, in which the data were collected both prospectively and retrospectively. This data set consisted of 47 patients receiving pulse therapy with the aim of reducing long-term complications. The aim of this project was to study the impact of virologic and immunologic factors on the rate of CD4 loss after treatment interruption. The exposure variables under investigation included CD4 cell count and viral load at treatment initiation. The rate of change of CD4 cell count after treatment interruption was estimated from observed data using advanced longitudinal data analysis methods (i.e., a linear mixed model). Random effects accounted for repeated measures of CD4 within each person after treatment interruption. The regression coefficient estimates from the model were then used to produce subject-specific rates of CD4 change, accounting for group trends in change. The exposure variables of interest were age, race, gender, and CD4 cell count and HIV RNA level at HAART initiation. The rate of fall of CD4 count did not depend on CD4 cell count or viral load at initiation of treatment; thus these factors may not be used to determine who has a chance of successful treatment interruption. CD4 and viral load were also studied by t-tests and ANOVA after grouping based on medians and quartiles to test for differences in the mean rate of CD4 fall after interruption. There was no significant difference between the groups, suggesting no association between the rate of fall of CD4 after treatment interruption and the above-mentioned exposure variables.
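A minimal sketch of the linear mixed model described above, with a random intercept and slope per patient so that subject-specific rates of CD4 change can be recovered from the fitted random effects. The file and column names are illustrative; the Johns Hopkins cohort data are not public.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cd4_after_interruption.csv")  # hypothetical file

# random intercept and slope per patient for repeated CD4 measures
md = smf.mixedlm(
    "cd4 ~ months_since_interruption",
    data=df,
    groups=df["patient_id"],
    re_formula="~months_since_interruption",
)
mdf = md.fit()

# subject-specific rate of CD4 change = fixed slope + that patient's random slope
fixed_slope = mdf.fe_params["months_since_interruption"]
rates = {pid: fixed_slope + re["months_since_interruption"]
         for pid, re in mdf.random_effects.items()}
```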
Abstract:
A life table methodology was developed that estimates the expected remaining Army service time and the expected remaining Army sick time by years of service for the United States Army population. A measure of illness impact was defined as the ratio of expected remaining Army sick time to expected remaining Army service time. The variances of the resulting estimators were derived on the basis of current data. The theory of partial and complete competing risks was considered for each type of decrement (death, administrative separation, and medical separation) and for the causes of sick time. The methodology was applied to worldwide U.S. Army data for calendar year 1978. A total of 669,493 enlisted personnel and 97,704 officers were reported on active duty as of 30 September 1978. During calendar year 1978, the Army Medical Department reported 114,647 inpatient discharges and 1,767,146 sick days. Although the methodology is completely general with respect to the definition of sick time, only sick time associated with an inpatient episode was considered in this study. Since the temporal measure was years of Army service, an age-adjusting process was applied to the life tables for comparative purposes. Analyses were conducted by rank (enlisted and officer), race, and sex, and were based on the ratio of expected remaining Army sick time to expected remaining Army service time. Seventeen major diagnostic groups, classified by the Eighth Revision, International Classification of Diseases, Adapted for Use in the United States, were ranked according to their cumulative (across years of service) contribution to expected remaining sick time. The study results indicated that enlisted personnel tend to have more expected hospital-associated sick time relative to their expected Army service time than officers. Non-white officers generally have more expected sick time relative to their expected Army service time than white officers; this racial differential was not supported within the enlisted population. Females tend to have more expected sick time relative to their expected Army service time than males. This tendency remained after diagnostic groups 580-629 (Genitourinary System) and 630-678 (Pregnancy and Childbirth) were removed. Problems associated with the circulatory system, digestive system and musculoskeletal system were among the three leading causes of cumulative sick time across years of service.
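To make the two life-table quantities concrete, the toy sketch below computes expected remaining service time and expected remaining sick time by year of service from per-year retention probabilities and mean sick days, then forms the illness-impact ratio. All numbers are invented and the discrete approximation is deliberately simplified; the study's actual estimators and variance formulas are not reproduced.

```python
import numpy as np

# retention probability from year k to k+1 (all decrements combined) and
# mean inpatient sick days per person-year -- all numbers invented
p = np.array([0.85, 0.88, 0.90, 0.92, 0.93])
sick_days = np.array([4.0, 3.5, 3.2, 3.0, 2.8])

n = len(p)
expected_service = np.zeros(n)   # years
expected_sick = np.zeros(n)      # days
for k in range(n):
    surv = 1.0
    for j in range(k, n):
        expected_service[k] += surv              # ~1 more year if still serving
        expected_sick[k] += surv * sick_days[j]  # sick days accrued that year
        surv *= p[j]

# illness impact: expected remaining sick time / expected remaining service time
illness_impact = expected_sick / (expected_service * 365.25)
print(np.round(illness_impact, 4))
```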
Abstract:
The University Compost Facility, 52274 260th St., Ames, Iowa, has completed three full years of operation. The facility is managed by the ISU Research Farms and has a separate revolving account that receives fees and sales and pays expenses. The facility is designed to be self-supporting, i.e., it does not receive allocations for its operations. The facility consists of seven 80 × 140 ft hoop barns and a new 55 × 120 ft hoop barn, all with paved floors. The facility also has a Mettler-Toledo electronic scale with a 10 ft × 70 ft platform to weigh all materials. Key machinery includes: 1) a compost turner, a used pull-type Aeromaster PT-170, 14 ft wide, made by Midwest Biosystems, Tampico, IL; 2) a converted dump-truck trailer used to construct windrows and haul material; 3) a telehandler, a Caterpillar TH407 with cab and 2.75 cubic yard bucket; and 4) a tractor, a John Deere 7520 (125 hp) with IVT (Infinitely Variable Transmission) and front-wheel assist, used to pull the turner and dump trailer.
Abstract:
Supply chain management works to bring the supplier, the distributor, and the customer into one cohesive process. The Supply Chain Council defined a supply chain as 'the flow and transformation of raw materials into products from suppliers through production and distribution facilities to the ultimate consumer,' and Chopra and Meindl (2001) define supply chain management as involving 'the flows between and among stages in a supply chain to maximize total profitability.' After 1950, supply chain management got a boost as production and manufacturing received the greatest attention. Inventory became the responsibility of the marketing, accounting and production areas, while order processing was part of accounting and sales. Supply chain management has become one of the most powerful engines of business transformation: it is the one area where operational efficiency can be gained, reducing an organization's costs while enhancing customer service. With the liberalization of world trade, globalization, and the emergence of new markets, many organizations have customers and competition throughout the world, either directly or indirectly. Business communities are aware that global competitiveness is the key to the success of a business, where competitiveness is the ability to produce, distribute and provide products and services for the open market in competition with others. The supply chain, a critical link between supplier, producer and customer, has now emerged as an essential business process and a strategic lever, a potential value contributor and a differentiator for the success of any business. Supply chain management is the management of all internal and external processes or functions to satisfy a customer's order (from raw materials through conversion and manufacture to logistics delivery). Whether goods are raw or processed, distributed wholesale or retail, or provided as business or technology services, in everyday life, in business or in the household, the supply chain is directly or indirectly associated with expanding socio-economic development. Supply chains drive competitive performance and support a strong growth impulse at the micro as well as the macroeconomic level. Keeping the India vision at the core of the objective, the role of the supply chain is to take up socio-economic challenges, improve competitive advantages, develop strategies, build capabilities, enhance value propositions, adopt the right technology, collaborate with stakeholders, and deliver environmentally sustainable outcomes with minimum resources.
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as screen scraping in the literature, is the content discovery of the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. The discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Also, service discovery can be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that allow introspecting the discovered data and services from the web, and the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources from the web. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories from the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified framework for discovery, which allows configuring agents to perform automated tasks; a scraping ontology, defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm, defined for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
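As a rough illustration of the content-discovery idea (not the thesis's actual scraping ontology or rule language), the sketch below maps pieces of an HTML representation onto RDF-style triples via CSS selectors. The URL, selectors and vocabulary are hypothetical.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical discovery rule: where each field of a news item lives in the markup
NEWS_RULE = {
    "type": "http://example.org/ns#NewsItem",
    "fields": {
        "http://purl.org/dc/terms/title": "article h1.headline",
        "http://purl.org/dc/terms/creator": "article span.byline",
        "http://purl.org/dc/terms/abstract": "article p.lead",
    },
}

def discover(url, rule):
    """Apply a discovery rule to a REST representation, yielding triples."""
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    triples = [(url, "rdf:type", rule["type"])]
    for predicate, selector in rule["fields"].items():
        for node in soup.select(selector):
            triples.append((url, predicate, node.get_text(strip=True)))
    return triples

print(discover("http://example.org/news/1", NEWS_RULE))  # hypothetical URL
```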
Abstract:
Information and Communication Technologies in general, and those related to the Internet in particular, have changed the way in which we communicate, relate to one another, produce, and buy and sell products, reducing the time and shortening the distance between suppliers and consumers. The steady breakthrough of computers, smartphones and fixed and/or wireless broadband has been reflected in large-scale use of these technologies by both individuals and businesses. Business-to-consumer (B2C) e-commerce reached a volume of 9,114 million euros in Spain in 2010, a 17.4% increase over the figure for 2009. This growth is due to two facts: an increase in the percentage of Internet users to 65.1% in 2010, 43.1% of whom acquired products or services through the Internet (1.6 percentage points higher than in 2009); and a rise in average spending per buyer to 831€ in 2010, a 10.9% increase over the previous year.
If we segment buyers according to whether or not they have previously made purchases, we can divide them into two categories: the novice buyer, who first made online purchases in 2010, and the experienced buyer, who made purchases in 2010 but had also done so previously. The socio-demographic profile of the novice buyer is that of a young person between 15 and 24 years of age, with secondary studies, middle to lower-middle class, a non-university student, residing in smaller towns, who continues to use payment methods such as cash on delivery (23.9%). In 2010, their average annual spending was 449€. The experienced buyer, someone who has previously made purchases online, has a different demographic profile: highly educated, upper class, employed, and resident in larger cities, exhibiting mature behavior when making online purchases thanks to greater experience; this type of buyer makes more intensive use of Internet-only channels that do not have a physical store. His or her average spending doubles that of the novice buyer (an average of 930€ annually). Experienced buyers therefore constitute the majority of buyers, with average spending that doubles that of buyers who have adopted the medium recently. It is consequently of interest to study the factors that predict whether a web user will buy another product or service on the Internet. The answer to this question has proven not to be simple. In Spain, the majority of goods and services are still bought in person, with a low incidence of distance selling such as TV shopping, catalogue sales or Internet sales. To answer the questions posed here, the investigation takes various viewpoints: it begins with a descriptive study from the demand side that characterizes the B2C e-commerce situation in Spain, focusing on the differences between experienced buyers and novice buyers. Subsequently, the investigation of technology acceptance and continuity-of-use models, and of the factors that affect continuity of use, with a special focus on B2C e-commerce, allows the problem to be approached through structural equation modelling, from which practical conclusions can also be drawn. This investigation follows the classic structure of scientific research: the subject of the investigation is introduced (Chapter 1); the state of B2C e-commerce in Spain is described using official sources (Chapter 2); the theoretical framework and state of the art of technology acceptance and continuity models are developed (Chapter 3), together with the main factors that affect acceptance and continuity (Chapter 4). Chapter 5 states the hypotheses of the investigation and poses the theoretical models that will be partially or completely confirmed or rejected. The statistical techniques to be used are described in Chapter 6, where the empirical results of the models developed in Chapter 5 are also analysed. Chapter 7 presents the main conclusions of the investigation and its limitations, and proposes new lines of research. The first part corresponds to Chapter 1, which introduces the investigation, justifying it from a theoretical and practical point of view. It also gives a brief introduction to the theory of consumer behavior from a classical perspective.
Technology acceptance models are presented, and then continuity and repurchase models are introduced; these are studied in more depth in Chapter 3. In this chapter, both the main and the secondary objectives are developed, a mind map of the investigation is proposed, and the main milestones of the work are planned in a timetable. The second part of the project corresponds to Chapters 2, 3 and 4. Chapter 2 describes B2C e-commerce in Spain from the demand perspective, citing secondary official sources. It offers a diagnosis of the e-commerce sector and the state of its maturity in Spain, as well as the barriers to and alternative methods of e-commerce. Subsequently, the differences between experienced buyers, who are of particular interest to this project, and novice buyers are analyzed, highlighting the differences between their profiles and usage. For both segments, aspects such as the place of purchase, the frequency of online purchases, the payment methods used, and attitudes towards online purchasing are considered. Chapter 3 begins by developing the main concepts of consumer behavior theory and goes on to study the main existing acceptance models (TPB, TAM, IDT, UTAUT and models derived from them, among others), paying special attention to their application in e-commerce. Subsequently, the models of continuity in technology use are analyzed (CDT, ECT, and the Theory of Justice), focusing again specifically on their application in e-commerce. Once the main technology acceptance and continuity models have been studied, Chapter 4 analyzes the main factors used in these models: quality, value, factors based on the confirmation of expectations (satisfaction, perceived usefulness), and factors specific to special situations, for example after a complaint, such as justice, emotions or trust. The third part, which appears in Chapter 5, develops the design of the investigation and the sample selection for the models. In the first section of the chapter, the hypotheses are stated, moving from the general to the specific and using the specific factors analyzed in Chapter 4, for their later study and validation in Chapter 6 using the appropriate statistical techniques. Based on the hypotheses, and on the models and factors studied in Chapters 3 and 4, two original theoretical models are defined and structured to answer the research questions posed in Chapter 1. In the second part of the chapter, the empirical investigation is designed, defining the following aspects: geographic and temporal scope, type of investigation, nature and setting of the investigation, primary and secondary sources used, data gathering techniques, measurement instruments used, and characteristics of the sample. The results of the project constitute the fourth part of the investigation and are developed in Chapter 6, which begins by analysing the statistical techniques based on structural equation models. Two alternatives are put forth: confirmatory models corresponding to Methods Based on Covariance (MBC), and predictive, component-based methods. The predictive techniques are chosen, on reasoned grounds, given the exploratory nature of the investigation.
The second part of Chapter 6 presents the results of the analysis of the measurement models and structural models built with the formative and reflective indicators defined in Chapter 4. To do so, the measurement models and then the structural models are validated, taking into account the threshold values of the statistical parameters required for validation. The fifth part corresponds to Chapter 7, which develops the conclusions based on the results of Chapter 6, analysing them in terms of their theoretical and practical contributions and drawing conclusions for business management. The limitations of the investigation are then described and new lines of research on various topics that arose during the work are proposed. Lastly, the bibliography lists all of the references used throughout this work. Key words: constant buyer, repurchase models, continuity in technology use, e-commerce, B2C, technology acceptance, technology acceptance models, TAM, TPB, IDT, UTAUT, ECT, continuance intention, satisfaction, perceived trust, justice, emotions, expectation confirmation, quality, value, PLS.
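To make the choice of predictive, component-based techniques concrete, here is a toy PLS path-modelling iteration for a single path between two reflective constructs (say, satisfaction and continuance intention). It follows the classic outer/inner estimation loop (Mode A, centroid scheme) on invented data; it is a sketch of the technique, not the models estimated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                       # indicators of construct 1
Y = 0.6 * X.mean(axis=1, keepdims=True) + rng.normal(size=(n, 2))  # construct 2

def std_scores(block, w):
    """Standardized construct scores: weighted sum of a block's indicators."""
    s = block @ w
    return (s - s.mean()) / s.std()

X = (X - X.mean(axis=0)) / X.std(axis=0)
Y = (Y - Y.mean(axis=0)) / Y.std(axis=0)
wx, wy = np.ones(3), np.ones(2)                   # initial outer weights

for _ in range(100):
    lx, ly = std_scores(X, wx), std_scores(Y, wy)
    sign = np.sign(np.corrcoef(lx, ly)[0, 1])     # centroid inner scheme
    wx_new, wy_new = X.T @ (sign * ly) / n, Y.T @ (sign * lx) / n  # Mode A
    done = max(abs(wx_new - wx).max(), abs(wy_new - wy).max()) < 1e-6
    wx, wy = wx_new, wy_new
    if done:
        break

lx, ly = std_scores(X, wx), std_scores(Y, wy)
path = np.corrcoef(lx, ly)[0, 1]   # structural coefficient (single-path case)
print(round(path, 3))
```

Unlike covariance-based SEM, this component approach makes no distributional assumptions and optimizes prediction of the endogenous scores, which is why exploratory studies of this kind tend to prefer it.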
Resumo:
The aim of this investigation is to evaluate passengers' perception of attributes related to the quality of bus services, and how this perception changes with the implementation of different measures. Surveys of passengers riding different bus lines were conducted in two scenarios: before and after the measures were implemented. The results of the passenger surveys were statistically analysed; then, an ordered logit model was used to analyse the differences between the surveys attributable to the implemented measures. Finally, a factor analysis was carried out to identify the underlying unobserved factors (latent variables) that the respondents perceived.
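As a rough illustration of this two-step analysis, the sketch below fits an ordered logit model to synthetic before/after satisfaction ratings and then runs a factor analysis on a block of attribute ratings. The data, the attribute names and the choice of `statsmodels` and `scikit-learn` are assumptions made for illustration; they do not reproduce the study's actual survey instrument.

```python
# Hedged sketch of the two analysis steps: an ordered logit on a 1-5
# satisfaction rating with a before/after dummy, then a factor analysis
# on attribute ratings. All data and names are synthetic.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n = 400

# Synthetic survey: "after" respondents rate the service slightly higher.
after = rng.integers(0, 2, size=n)                 # 0 = before, 1 = after
latent = 0.8 * after + rng.logistic(size=n)
rating = np.digitize(latent, bins=[-1.5, 0.0, 1.5, 3.0]) + 1   # 1..5 scale
df = pd.DataFrame({"rating": rating, "after": after})

# Step 1: ordered logit; a positive "after" coefficient would indicate
# higher perceived quality once the measures are in place.
model = OrderedModel(df["rating"], df[["after"]], distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())

# Step 2: factor analysis on correlated attribute ratings to recover
# latent perception factors (e.g., "service level" vs. "comfort").
f1, f2 = rng.normal(size=n), rng.normal(size=n)
attrs = np.column_stack([
    f1 + rng.normal(scale=0.5, size=n),            # punctuality
    f1 + rng.normal(scale=0.5, size=n),            # frequency
    f2 + rng.normal(scale=0.5, size=n),            # seat comfort
    f2 + rng.normal(scale=0.5, size=n),            # cleanliness
])
fa = FactorAnalysis(n_components=2).fit(attrs)
print("factor loadings:\n", fa.components_.T.round(2))
```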
Resumo:
Today, society shows a growing interest in health care. This claim is supported by two realities: on the one hand, the rise of healthy practices (sporting activity, attention to diet, etc.); on the other, the boom in smart devices (watches, phones or wristbands) able to measure physical parameters such as heart rate, breathing rate, distance travelled or calories burned. Combining both factors (interest in one's health status and the commercial availability of smart devices), a multitude of applications are emerging that can not only monitor the user's current state of health but also recommend changes of habit that lead to an improvement in physical condition. In this context, so-called wearable devices, together with the Internet of Things (IoT) paradigm, open up new market niches for applications that go beyond improving physical fitness, proposing solutions for the care of patients, the supervision of children or the elderly, defence and security, the monitoring of at-risk personnel (such as firefighters or police officers) and a long list of applications still to come. The IoT paradigm can be built on existing Wireless Sensor Networks (WSNs), and connecting the aforementioned wearable devices to these networks can ease the transition of new users towards IoT applications. But one of the problems intrinsic to these networks is their heterogeneity: there is a multitude of operating systems, communication protocols, development platforms, proprietary solutions and so on. The main objective of this thesis is to make significant contributions towards solving not only the heterogeneity problem but also that of providing sufficient security mechanisms to safeguard the integrity of the data exchanged in this type of application, a matter of great importance given that users' medical and biometric data are protected by national and EU law. To achieve these objectives, the work began with a comprehensive state-of-the-art study of the technologies related to the research framework (platforms and standards for WSNs and IoT, distributed implementation platforms, wearable devices, and operating systems and programming languages). This study informed the design decisions behind the three main contributions of this thesis: a service bus for wearable devices (WDSB, Wearable Device Service Bus) based on existing technologies such as ESB, WWBAN, WSN and IoT; an inter-domain communication protocol for wearable devices (WIDP, Wearable Inter-Domain communication Protocol) that integrates, in a single solution, protocols that can be implemented on low-capacity devices (such as wearables and WSN nodes); and, finally, a security proposal for WSNs based on the application of trust domains.
Although the contributions presented here are generic in their application, a specific application scenario was used for their validation: a solution for monitoring physical parameters in sports environments, developed within the European research project "LifeWear". In this scenario, all the elements needed to validate the main contributions of this thesis were deployed and, in addition, a mobile application was developed by one of the project partners, providing an external validation of the solution. The scenario used wearable devices such as a smartwatch, an Android mobile phone and a wireless heart-rate monitor able to obtain various physiological parameters from the athlete. Several validation tests were run on this scenario, with satisfactory results.

ABSTRACT

Nowadays, society is shifting towards a growing interest in and concern for health care. This phenomenon can be acknowledged by two facts: first, the increasing number of people practising some kind of healthy activity (sports, a balanced diet, etc.); secondly, the growing number of commercial wearable smart devices (smartwatches or bands) able to measure physiological parameters such as heart rate, breathing rate, distance or calories burned. A large number of applications combining both facts are appearing. These applications not only monitor the health status of the user, but also provide recommendations about routines in order to improve that status. In this context, wearable devices merged with the Internet of Things (IoT) paradigm enable the proliferation of new market segments for these health wearable-based applications. Furthermore, such applications can provide solutions for elderly or baby care, in-hospital or in-home patient monitoring, security and defence, and an unforeseen number of future applications. The introduced IoT paradigm can be developed using existing Wireless Sensor Networks (WSNs) by connecting the novel wearable devices to them, easing the migration of new users and actors to the IoT environment. However, a major issue appears in this environment: heterogeneity. In fact, there is a large number of operating systems, hardware platforms, communication and application protocols, and programming languages, each with unique features. The main objective of this thesis is to define and implement a solution for intelligent service management in wearable and ubiquitous devices, so as to solve the heterogeneity issues that arise when dealing with the interoperability and interconnectivity of devices and software of different natures. Additionally, a security schema based on trust domains is proposed as a solution to the privacy problems that arise when private data (e.g., biomedical parameters or user identification) are broadcast over a wireless network. The proposal, made after a comprehensive state-of-the-art analysis, includes the design of a Wearable Device Service Bus (WDSB) incorporating the technologies identified in the requirements analysis (ESB, WWBAN, WSN and IoT). Applications are able to access the WSN services regardless of the platform and operating system on which they run.
Besides, this proposal also includes the design of a Wearable Inter-Domain communication Protocols set (WIDP), which integrates lightweight protocols suitable for low-capacity devices (REST, JSON, AMQP, CoAP, etc.). Furthermore, a security solution for service management, based on a model of trust domains for deploying security services in WSNs, has been designed. Although the proposal is a generic framework for applications based on services provided by wearable devices, an application scenario has been included for testing purposes. This validation scenario presents an autonomous physical-condition monitoring system, based on a WSN, that brings several elements together in an IoT setting: a smartwatch, a physiological monitoring device and a smartphone. In summary, the general objective of this thesis is to solve the heterogeneity and security challenges that arise when developing applications for WSNs and wearable devices. As presented in the thesis, the proposed solution has been successfully validated in a real scenario and the results obtained were satisfactory.
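To make the lightweight-protocol idea concrete, the sketch below shows one plausible shape for such an exchange: a heart-rate sample from a wearable serialized as JSON and posted over REST to a gateway. The payload fields, the endpoint URL and the use of Python's standard library are illustrative assumptions; the thesis's actual WIDP message formats are not reproduced here.

```python
# Hedged sketch: a wearable-style heart-rate sample serialized as JSON
# and POSTed to a REST gateway, in the spirit of the lightweight
# protocols (REST/JSON) named above. Field names and the gateway URL
# are hypothetical, not the actual WIDP message format.
import json
import time
import urllib.request

def post_sample(heart_rate_bpm: int, device_id: str,
                gateway_url: str = "http://gateway.local/api/v1/samples"):
    """Send one physiological sample to an (assumed) gateway endpoint."""
    payload = {
        "device_id": device_id,          # e.g., a smartwatch identifier
        "metric": "heart_rate",
        "value": heart_rate_bpm,
        "unit": "bpm",
        "timestamp": int(time.time()),   # seconds since the epoch
    }
    req = urllib.request.Request(
        gateway_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # In a trust-domain setup, a credential issued by the domain
    # authority would accompany the request; omitted in this sketch.
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

if __name__ == "__main__":
    print(post_sample(heart_rate_bpm=72, device_id="watch-01"))
```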