Resumo:
The Cutri Formation’s type locality, exposed in the NW of Mallorca, Spain, has previously been described by Álvaro et al. (1989) and further interpreted in Abbots’ (1989) unpublished PhD thesis as a base-of-slope carbonate apron. Incorporating new field and laboratory analysis, this paper refines that interpretation. The analysis demonstrates that the Cutri Formation was deposited in a carbonate base-of-slope environment on the palaeowindward side of a Mid-Jurassic Tethyan platform. Key evidence, such as laterally extensive exposures and abundant calciturbidite and debris-flow deposits among hemipelagic deposits, strongly supports this interpretation.
Resumo:
This article presents a methodological proposal for mapping the diversity of the audiovisual industry in the digital scenario by portraying the most important interactions between those who create, produce, distribute and disseminate audiovisual productions online, paying special attention to powerful intermediaries and to small and medium independent agents. Taking a flexible understanding of social network analysis as its point of departure, the aim is to understand the structure of the audiovisual industry on the internet: for a given sector, the agents, their relations and the networks they give rise to – as well as the structural conditions under which they operate – are studied. The aim is to answer questions such as: what is mapping, what is worth mapping, how can it be done, and what advantages and disadvantages do the results present?
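The mapping approach described above (agents as nodes, relations as edges) can be illustrated with a minimal sketch. The agent names and the degree-centrality measure here are hypothetical illustrations, not data or methods from the article:

```python
# Minimal sketch of the mapping idea: agents as nodes, relations as edges.
# Agent names are hypothetical, for illustration only.
relations = {
    "platform_A": ["producer_1", "producer_2", "distributor_1"],
    "producer_1": ["platform_A", "distributor_1"],
    "producer_2": ["platform_A"],
    "distributor_1": ["platform_A", "producer_1"],
}

def degree_centrality(graph):
    """Share of the other nodes each agent is directly connected to."""
    n = len(graph)
    return {node: len(neigh) / (n - 1) for node, neigh in graph.items()}

central = degree_centrality(relations)
# A powerful intermediary (the platform) stands out as the most central node.
most_central = max(central, key=central.get)
```

Even this toy measure shows how structural analysis can separate powerful intermediaries from small independent agents.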
Resumo:
The present thesis explores how interaction is initiated in multi-party meetings in Adobe Connect 7.0, with a particular focus on how co-presence and mutual availability are established during the preambles of 18 meetings held in Spanish without a moderator. Taking Conversation Analysis (CA) as its methodological point of departure, this thesis comprises four studies, each analyzing a particular phenomenon within the interaction of the preambles in a multimodal environment that allows simultaneous interaction through video, voice and text chat. The first study (Artículo I) shows how participants jointly solve the issue of availability in a technological environment where being online is not necessarily understood as being available for communication. The second study (Artículo II) focuses on the beginning of the audiovisual interaction, in particular on how participants check the proper functioning of the audiovisual mode. The third study (Artículo III) explores silences within the interaction of the preamble. It shows that the length of gaps and lapses becomes a significant aspect of the preambles and how they are connected to the issue of availability. Finally, the fourth study introduces the notion of modal alignment, an interactional phenomenon that systematically appears at the beginning of the encounters and seems to be used and understood as a strategy for establishing mutual availability and negotiating the participation framework. As a whole, this research shows how participants, in order to establish mutual co-presence and availability, adapt to a particular technology in terms of participation management, deploying strategies and successive actions which, as is the case with the activation of their respective webcams, seem to be understood as predictable within the intricate process of establishing mutual availability before the meeting starts.
Resumo:
The continuous advancement in computing, together with the decline in its cost, has resulted in technology becoming ubiquitous (Arbaugh, 2008; Gros, 2007). Technology is growing and is part of our lives in almost every respect, including the way we learn. Technology helps to collapse time and space in learning. For example, technology allows learners to engage with their instructors synchronously, in real time, and also asynchronously, by enabling sessions to be recorded. Space and distance are no longer an issue provided there is adequate bandwidth, which determines the most appropriate format, such as text, audio or video. Technology has revolutionised the way learners learn, courses are designed and ‘lessons’ are delivered, and continues to do so. The learning process can be made vastly more efficient as learners have knowledge at their fingertips, and unfamiliar concepts can be easily searched and an explanation found in seconds. Technology has also enabled learning to be more flexible, as learners can learn anywhere, at any time, and using different formats, e.g. text or audio. From the perspective of instructors and L&D providers, technology offers these same advantages, plus easy scalability. Administratively, preparatory work can be undertaken more quickly even whilst student numbers grow. Learners from distant and new locations can be easily accommodated. In addition, many technologies can be easily scaled to accommodate new functionality and/or other new technologies. ‘Designing and Developing Digital and Blended Learning Solutions’ (5DBS) has been developed to recognise the growing importance of technology in L&D. This unit contains four learning outcomes, each with two assessment criteria (as in all other units), except Learning Outcome 3, which has three assessment criteria.
The four learning outcomes in this unit are: • Learning Outcome 1: Understand current digital technologies and their contribution to learning and development solutions; • Learning Outcome 2: Be able to design blended learning solutions that make appropriate use of new technologies alongside more traditional approaches; • Learning Outcome 3: Know about the processes involved in designing and developing digital learning content efficiently and what makes for engaging and effective digital learning content; • Learning Outcome 4: Understand the issues involved in the successful implementation of digital and blended learning solutions. Each learning outcome is an individual chapter, and each assessment criterion is allocated its own section within the respective chapter. This first chapter addresses the first learning outcome, which has two assessment criteria: summarise the range of currently available learning technologies, and critically assess a learning requirement to determine the contribution that could be made through the use of learning technologies. The introduction to chapter one is in Section 1.0. Chapter 2 discusses the design of blended learning solutions, considering how digital learning technologies may support face-to-face and online delivery. Three sets of learning theories (behaviourism, cognitivism and constructivism) are introduced, and the implications of each for instructional design for blended learning are discussed. Chapter 3 centres on how relevant digital learning content may be created. This chapter includes a review of the key roles, tools and processes involved in developing digital learning content. Finally, Chapter 4 concerns the delivery and implementation of digital and blended learning solutions. This chapter surveys the key formats and models used to inform the configuration of virtual learning environment (VLE) software platforms.
In addition, various software technologies that may be important in creating a VLE ecosystem that helps to enhance the learning experience are outlined. We introduce the notion of the personal learning environment (PLE), which has emerged from the democratisation of learning. We also review the roles, tools, standards and processes that L&D practitioners need to consider in the delivery and implementation of digital and blended learning solutions.
Resumo:
The Computing Division of the Business School at University College Worcester provides computing and information technology education to a range of undergraduate students. Topics include various approaches to programming, artificial intelligence, operating systems and digital technologies. Each of these has its own potentially conflicting requirements for a pedagogically sound programming environment. This paper describes an endeavor to develop a common programming paradigm across all topics. This involves the combined use of autonomous robots and Java simulations.
Resumo:
A lightweight Java application suite has been developed and deployed, allowing collaborative learning between students and tutors at remote locations. Students can engage in group activities online and also collaborate with tutors. A generic Java framework has been developed and applied to electronics, computing and mathematics education. The applications are, respectively: (a) a digital circuit simulator, which allows students to collaborate in building simple or complex electronic circuits; (b) a Java programming environment whose paradigm is behaviour-based robotics; and (c) a differential equation solver useful in modelling complex and nonlinear dynamic systems. Each student sees a common shared window to which text or graphical objects may be added and which is then shared online. A built-in chat room supports collaborative dialogue. Students can work either in collaborative groups or in teams as directed by the tutor. This paper summarises the technical architecture of the system as well as the pedagogical implications of the suite. A report of student evaluation, distilled from use over a period of twelve months, is also presented. We intend this suite to facilitate learning between groups at one or many institutions and to facilitate international collaboration. We also intend to use the suite as a tool to research the establishment and behaviour of collaborative learning groups. We shall make our software freely available to interested researchers.
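The shared-window behaviour described above can be sketched as a simple broadcast pattern: every object one participant adds is propagated to every connected view, and late joiners receive the existing state. This is an illustrative sketch under assumed semantics, not the authors’ Java code (and in Python rather than Java for brevity):

```python
# Hypothetical sketch of a shared canvas: objects added by one participant
# are broadcast to all connected views; late joiners see the full state.
class SharedWindow:
    def __init__(self):
        self.objects = []   # text or graphical objects on the canvas
        self.views = []     # one view per connected student/tutor

    def connect(self, view):
        self.views.append(view)
        view.extend(self.objects)   # late joiners receive existing state

    def add(self, obj):
        self.objects.append(obj)
        for view in self.views:
            view.append(obj)        # propagate to every participant

alice, bob = [], []
window = SharedWindow()
window.connect(alice)
window.add("resistor R1")
window.connect(bob)                 # Bob joins late but gets the full canvas
window.add("capacitor C1")
```

In a real deployment the views would be remote clients behind sockets, but the state-replication idea is the same.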
Resumo:
It is now generally accepted that cyber crime represents a big threat to organisations, and that they need to take appropriate action to protect their valuable information assets. However, current research shows that, although small businesses understand that they are potentially vulnerable, many are still not taking sufficient action to counteract the threat. Last year, the authors sought, through a more generalised but categorised attitudinal study, to explore the reasons why smaller SMEs in particular were reluctant to engage with accepted principles for protecting their data. The results showed that SMEs understood many of the issues. They were prepared to spend more but were particularly suspicious about spending on information assurance. The authors’ current research again focuses on SME attitudes but this time the survey asks only questions directly relating to information assurance and the standards available, in an attempt to try to understand exactly what is causing them to shy away from getting the badge or certificate that would demonstrate to customers and business partners that they take cyber security seriously. As with last year’s study, the results and analysis provide useful pointers towards the broader business environment changes that might cause SMEs to be more interested in working towards an appropriate cyber security standard.
Resumo:
Images acquired from unmanned aerial vehicles (UAVs) can provide data with unprecedented spatial and temporal resolution for three-dimensional (3D) modeling. Solutions developed for this purpose mainly operate on photogrammetric principles, namely UAV-Photogrammetry Systems (UAV-PS). Such systems are used in applications where both geospatial and visual information about the environment is required. These applications include, but are not limited to, natural resource management such as precision agriculture, military and police services such as traffic-law enforcement, precision engineering such as infrastructure inspection, and health services such as epidemic emergency management. UAV-photogrammetry systems can be differentiated by their spatial characteristics in terms of accuracy and resolution. That is, some applications, such as precision engineering, require high-resolution, high-accuracy information about the environment (e.g. 3D modeling with better than one centimeter accuracy and resolution), while in other applications lower levels of accuracy might be sufficient (e.g. wildlife management needing only a few decimeters of resolution). Even in those applications, however, the specific characteristics of UAV-PSs should be carefully considered during both system development and application in order to yield satisfactory results. In this regard, this thesis presents a comprehensive review of the applications of unmanned aerial imagery, with the objective of determining the challenges that remote-sensing applications of UAV systems currently face. This review also made it possible to identify the specific characteristics and requirements of UAV-PSs, which are mostly ignored or not thoroughly assessed in recent studies. Accordingly, the first part of this thesis explores the methodological and experimental aspects of implementing a UAV-PS.
The developed system was extensively evaluated for precise modeling of an open-pit gravel mine and for performing volumetric-change measurements. This application was selected for two main reasons. Firstly, this case study provided a challenging environment for 3D modeling in terms of scale changes, terrain relief variations, and structure and texture diversity. Secondly, open-pit-mine monitoring demands high levels of accuracy, which justifies our efforts to push the developed UAV-PS to its maximum capacity. The hardware of the system consisted of an electric-powered helicopter, a high-resolution digital camera and an inertial navigation system. The software of the system included in-house programs specifically designed for camera calibration, platform calibration, system integration, onboard data acquisition, flight planning and ground control point (GCP) detection. The detailed features of the system are discussed in the thesis, and solutions are proposed to enhance the system and its photogrammetric outputs. The accuracy of the results was evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy were assessed. The second part of this thesis concentrates on improving the techniques of sparse and dense reconstruction. The proposed solutions are alternatives to traditional aerial photogrammetry techniques, properly adapted to the specific characteristics of unmanned, low-altitude imagery. Firstly, a method was developed for robust sparse matching and epipolar-geometry estimation. The main achievement of this method was its capacity to handle a very high percentage of outliers (errors among corresponding points) with remarkable computational efficiency compared to state-of-the-art techniques.
Secondly, a block bundle adjustment (BBA) strategy was proposed based on the integration of intrinsic camera calibration parameters as pseudo-observations into the Gauss-Helmert model. The principal advantage of this strategy was controlling the adverse effect of unstable imaging networks and noisy image observations on the accuracy of self-calibration. A sparse implementation of this strategy was also developed, which allowed its application to data sets containing large numbers of tie points. Finally, the concept of intrinsic curves was revisited for dense stereo matching. The proposed technique achieves a high level of accuracy and efficiency by searching only a small fraction of the whole disparity search space and by internally handling occlusions and matching ambiguities. These photogrammetric solutions were extensively tested using synthetic data, close-range images and the images acquired from the gravel-pit mine. Achieving an absolute 3D mapping accuracy of 11±7 mm illustrates the success of this system for high-precision modeling of the environment.
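The robust matching described above must tolerate a very high fraction of outliers; the classical hypothesise-and-verify idea behind such estimators can be sketched in miniature. This toy example fits a 2-D line rather than an epipolar geometry, and is an illustration of the general principle, not the thesis’s algorithm:

```python
import random

# Toy RANSAC-style sketch: robust model fitting with many gross outliers.
# The "model" here is a line y = a*x + b; in the thesis the model would be
# the epipolar geometry between two images.

def fit_line(p, q):
    """Exact line through two points (assumes distinct x-coordinates)."""
    (x1, y1), (x2, y2) = p, q
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Repeatedly hypothesise a model from a minimal sample and keep the
    hypothesis supported by the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        p, q = rng.sample(points, 2)
        if p[0] == q[0]:
            continue                      # degenerate sample, skip
        a, b = fit_line(p, q)
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# ~77% inliers on y = 2x + 1, plus gross outliers on another line.
inliers = [(x, 2 * x + 1) for x in range(20)]
outliers = [(x, 40 - 3 * x) for x in range(5, 11)]
model, found = ransac_line(inliers + outliers, tol=0.5)
```

A least-squares fit over all points would be dragged off by the outliers; the hypothesise-and-verify loop recovers the true line because only consensus size matters.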
Resumo:
Digital Culture is a reality of the 21st century, in which human relations are strongly mediated by technology and by the digital. The digital has changed people’s behaviour and influenced their cultural environment. The concept of Digital Culture includes knowing where to find the information, tools and systems needed to carry out a given task not only effectively but also efficiently. The competences needed to exploit the technologies that make this possible are increasingly demanded in social interaction. Taking into account existing studies of ICT competences, namely those proposed by UNESCO and by the Angolan Government in its National Development Plan for 2013-2017, within which the Rede de Mediatecas de Angola initiative is framed, Digital Culture and its promotion within the Mediatecas were taken as the object of study. In a first assessment since the opening of the Mediateca in February 2014, we observed weak uptake by teachers of the services of the Mediateca do Huambo, Angola. In this context the question arises of why teachers do not exploit ICT for their teaching activities or for their own learning. The research question is therefore: what strategies should be developed to increase teachers’ use of ICT, and how can the Mediateca do Huambo serve as a space for the promotion of Digital Culture?
A case study was carried out in the context of the Mediateca do Huambo, in which data were collected through (a) questionnaires administered to teachers of the 1st and 2nd cycles of secondary education in the Municipality of Huambo, to map teachers’ ICT competences against UNESCO standards and to ascertain their training in and use of ICT in educational contexts; (b) a set of tasks designed to verify teachers’ digital competences; and finally (c) the implementation of a pilot project, “Mediateca +Escola”, intended to bring teachers and pupils to the Mediateca to develop an ICT project in which teachers would have an opportunity to demonstrate the competences they claimed to possess. As a final result of this case study, we can conclude that 71.27% of responses indicate that teachers perceive a need for further ICT training and competence development, while 27.50% confirm that teachers feel confident with ICT. In view of the negative responses, which point to the need to devise a strategy for building this group’s ICT competences so as to include them in the network society equipped with a Digital Culture, a model was proposed that validates the data obtained through the questionnaire and also supports the design of strategies for promoting Digital Culture.
Resumo:
Contemporary integrated circuits are designed and manufactured in a globalized environment, leading to concerns about piracy, overproduction and counterfeiting. One class of techniques to combat these threats is circuit obfuscation, which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality, in order to increase the complexity and cost of reverse engineering. Most existing circuit obfuscation methods are based on inserting additional logic (called “key gates”) or camouflaging existing gates in order to make it difficult for a malicious user to obtain the complete layout information without extensive computation to determine key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, he or she can use advanced logic analysis, circuit simulation tools and Boolean SAT solvers to reveal the unknown gate-level information without exhaustively trying all input vectors, thus bringing down the complexity of reverse engineering. To counter this problem, some ‘provably secure’ logic encryption algorithms that emphasize methodical selection of camouflaged gates have previously been proposed in the literature [1,2,3]. The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don’t-care conditions. We also present a proof of concept of a new functional or logic obfuscation technique that not only conceals but modifies the circuit functionality in addition to the gate-level description, and can be applied automatically during the design process. Our layout obfuscation technique utilizes don’t-care conditions (namely, Observability and Satisfiability Don’t Cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification.
Here, camouflaging or obfuscating a gate means replacing the candidate gate with a 4:1 multiplexer that can be configured to perform all possible 2-input/1-output functions, as proposed by Bao et al. [4]. It is important to emphasize that our approach not only obfuscates but also alters sub-circuit-level functionality in an attempt to make IP piracy difficult. The choice of gates to obfuscate determines the effort required to reverse engineer or brute-force the design. As such, we propose a method of camouflaged-gate selection based on the intersection of output logic cones. By choosing these candidate gates methodically, the complexity of reverse engineering can be made exponential, thus making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms to maximize the reverse-engineering (RE) complexity based on don’t-care-based obfuscation and methodical gate selection. Thus, the goal of protecting the design IP from malicious end-users is achieved. It also makes it significantly harder for rogue elements in the supply chain to use, copy or replicate the same design with a different logic. We analyze the reverse-engineering complexity by applying our obfuscation algorithm to ISCAS-85 benchmarks. Our experimental results indicate that significant reverse-engineering complexity can be achieved at minimal design overhead (average area overhead for the proposed layout obfuscation methods is 5.51% and average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future. References: [1] R. Chakraborty and S. Bhunia, “HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493–1502, 2009. [2] J. A. Roy, F. Koushanfar, and I. L. Markov, “EPIC: Ending Piracy of Integrated Circuits,” in Proc. Design, Automation and Test in Europe (DATE), 2008, pp. 1069–1074. [3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, “Security Analysis of Integrated Circuit Camouflaging,” in Proc. ACM Conference on Computer and Communications Security, 2013. [4] B. Liu and B. Wang, “Embedded Reconfigurable Logic for ASIC Design Obfuscation Against Supply Chain Attacks,” in Proc. Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1–6.
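The camouflaging primitive referenced above (a 4:1 multiplexer configured to realize any 2-input/1-output function, after Bao et al. [4]) behaves like a 4-bit lookup table. A minimal behavioural sketch, with configuration names that are illustrative only:

```python
# Sketch of the camouflaged cell: a 4:1 MUX whose data inputs hold a hidden
# 4-bit truth table, so one cell can realize any 2-input/1-output function.
# Configuration names are illustrative, not from the paper.

def mux4(config, a, b):
    """4:1 MUX used as a lookup table: the gate inputs (a, b) act as the
    select lines, choosing config[2*a + b]."""
    return config[2 * a + b]

# Truth tables for inputs (a, b) = 00, 01, 10, 11:
AND_CFG  = (0, 0, 0, 1)
OR_CFG   = (0, 1, 1, 1)
XOR_CFG  = (0, 1, 1, 0)
NAND_CFG = (1, 1, 1, 0)

# The same cell behaves as different gates depending on the hidden
# configuration, which is what forces an attacker to search the space
# of possible configurations for every camouflaged gate.
assert all(mux4(AND_CFG, a, b) == (a & b) for a in (0, 1) for b in (0, 1))
assert all(mux4(XOR_CFG, a, b) == (a ^ b) for a in (0, 1) for b in (0, 1))
```

With k such cells chosen on intersecting output logic cones, naively resolving the design requires exploring up to 16^k configurations, which is the exponential reverse-engineering cost the paper targets.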
Resumo:
Most cancer-related deaths are due to metastasis formation: the ability of cancer cells to break away from the primary tumor site, transmigrate through the endothelium, and form secondary tumors in distant areas. Many studies have identified links between the mechanical properties of the cellular microenvironment and the behavior of cancer cells. Cells may experience heterogeneous microenvironments of varying stiffness during tumor progression, transmigration, and invasion into the basement membrane. In addition to mechanical factors, the localization of RNAs to lamellipodial regions has been proposed to play an important part in metastasis. This dissertation provides a quantitative evaluation of the biophysical effects on cancer cell transmigration and RNA localization. In the first part of this dissertation, we sought to compare cancer cell and leukocyte transmigration and to investigate the impact of matrix stiffness on the transmigration process. We found that cancer cell transmigration includes an additional step, ‘incorporation’ into the endothelial cell (EC) monolayer. During this phase, cancer cells physically displace ECs and spread into the monolayer. Furthermore, the effects of subendothelial matrix stiffness and endothelial activation on cancer cell incorporation are cell-specific, a notable difference from the process by which leukocytes transmigrate. Collectively, our results provide mechanistic insights into tumor cell extravasation and demonstrate that incorporation into the endothelium is one of the earliest steps. In the next part of this work, we investigated how matrix stiffness impacts RNA localization and its relevance to cancer metastasis. In migrating cells, the tumor suppressor protein adenomatous polyposis coli (APC) targets RNAs to cellular protrusions. We observed that increasing stiffness promotes the peripheral localization of these APC-dependent RNAs and that cellular contractility plays a role in regulating this pathway.
We next investigated the mechanism underlying the effect of substrate stiffness and cellular contractility. We found that contractility drives localization of RNAs to protrusions through modulation of detyrosinated microtubules, a network of modified microtubules that associate with, and are required for localization of APC-dependent RNAs. These results raise the possibility that as the matrix environment becomes stiffer during tumor progression, it promotes the localization of RNAs and ultimately induces a metastatic phenotype.
Resumo:
This article summarises the implementation of the Digital Television (DTV) Laboratory at the Universidad de Cuenca, created as a reliable environment for experimentation and research that makes use of the features of the ISDB-Tb standard, adopted by Ecuador in 2010 for the transmission of free-to-air television signals. The aim of this article is to document the aspects considered in simulating a real scenario in which a Transport Stream (TS), composed of audiovisual content and interactive applications, is first generated, then transmitted over the communications channel, and finally received on a television set with an ISDB-Tb receiver. This facilitates the development of, and experimentation with, new services that take advantage of the new DTV format.
Resumo:
Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in terms of diversity, but are integral to the establishment of classes of risk exposure, and the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, the development of an appropriate methodology for risk management, the evaluation of existing preservation evaluation approaches and metrics, the structuring of best practice knowledge and lastly the demonstration of a range of tools that utilise our findings. We describe a mixed methodology that uses interview and survey, extensive content analysis, practical case study and iterative software and ontology development. We build on a robust foundation, the development of the Digital Repository Audit Method Based on Risk Assessment. 
We summarise the extent of the challenge facing the digital preservation community (and, by extension, users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, increasing complexity, and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. Collectively they imply a challenge that demands an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is particularly prioritised. We introduce our approach, based on risk management. Risk is an expression both of the likelihood of a negative outcome and of the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity: a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risks can be characterised according to associated goals, activities, responsibilities and policies in terms of both their manifestation and their mitigation. They can be deconstructed into their atomic units, and responsibility for their resolution delegated appropriately. We go on to describe how the manifestation of risks typically spans an entire organisational environment, and how risk, as the focus of our analysis, safeguards against omissions that may occur when pursuing functional, departmental or role-based assessment. We discuss the importance of relating risk factors, through the risks themselves or through associated system elements. Doing so will yield the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community.
We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are exposed by revisiting legacy studies and exposing the resource and associated applications to evaluation by the digital preservation community.
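The risk characterisation described in this abstract (likelihood, impact, atomic units, delegated responsibility) can be sketched as a minimal data structure. Field names and example risks here are hypothetical illustrations, not drawn from the PORRO ontology:

```python
from dataclasses import dataclass

# Hypothetical sketch of an atomic risk record: likelihood and impact
# combine into an exposure score, and responsibility is delegated to an
# owner, mirroring the deconstruction-and-delegation idea in the thesis.
@dataclass
class Risk:
    name: str
    likelihood: float   # probability of the negative outcome, 0..1
    impact: int         # severity on an ordinal scale, e.g. 1..5
    owner: str          # party delegated to mitigate this risk

    @property
    def exposure(self):
        """A common risk score: likelihood weighted by impact."""
        return self.likelihood * self.impact

risks = [
    Risk("format obsolescence", 0.4, 5, "preservation team"),
    Risk("storage media failure", 0.1, 4, "IT operations"),
]

# Ranking by exposure supports the kind of preservation planning and
# evaluation decisions the ontology is intended to inform.
prioritised = sorted(risks, key=lambda r: r.exposure, reverse=True)
```

Relating such records to repository and object characteristics, as the PORRO ontology does, is what turns isolated scores into a reusable knowledge base.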
Resumo:
The use of technology is considered an effective means of working on academic content with pupils with Autism Spectrum Disorders (ASD), enabling the creation of creative and constructive environments in which differentiated, meaningful and high-quality activities can be developed. However, the development of technological applications for children and young people with ASD still receives little attention, particularly with regard to promoting deductive reasoning, even though this is an area of great interest for individuals with this disorder. For pupils with ASD, the development of mathematical reasoning is crucial, given the importance of these skills for a successful autonomous life. This evidence shows the innovative contribution that the learning environment described in this paper can make in this area. The development of this environment began with the creation and validation of a model that made it possible to specify and prototype the solution, which offers modes of dynamically adapting the proposed activities to the user’s profile, seeking to promote the development of (inductive and deductive) mathematical reasoning. Given the heterogeneity of ASD, the environment is based on dynamic adaptation and on activities adjusted to users’ profiles. In this paper we present the research work already carried out and outline the work still to be done.
Resumo:
This dissertation studies technological change in the context of energy and environmental economics. Technology plays a key role in reducing greenhouse gas emissions from the transportation sector. Chapter 1 estimates a structural model of the car industry that allows for endogenous product characteristics to investigate how gasoline taxes, R&D subsidies and competition affect fuel efficiency and vehicle prices in the medium-run, both through car-makers' decisions to adopt technologies and through their investments in knowledge capital. I use technology adoption and automotive patents data for 1986-2006 to estimate this model. I show that 92% of fuel efficiency improvements between 1986 and 2006 were driven by technology adoption, while the role of knowledge capital is largely to reduce the marginal production costs of fuel-efficient cars. A counterfactual predicts that an additional $1/gallon gasoline tax in 2006 would have increased the technology adoption rate, and raised average fuel efficiency by 0.47 miles/gallon, twice the annual fuel efficiency improvement in 2003-2006. An R&D subsidy that would reduce the marginal cost of knowledge capital by 25% in 2006 would have raised investment in knowledge capital. This subsidy would have raised fuel efficiency only by 0.06 miles/gallon in 2006, but would have increased variable profits by $2.3 billion over all firms that year. Passenger vehicle fuel economy standards in the United States will require substantial improvements in new vehicle fuel economy over the next decade. Economic theory suggests that vehicle manufacturers adopt greater fuel-saving technologies for vehicles with larger market size. Chapter 2 documents a strong connection between market size, measured by sales, and technology adoption. 
Using variation in consumer demographics and purchasing patterns to account for the endogeneity of market size, we find that a 10 percent increase in market size raises vehicle fuel efficiency by 0.3 percent, compared with a mean improvement of 1.4 percent per year over 1997-2013. Historically, fuel price and demographic-driven market size changes have had large effects on technology adoption. Furthermore, fuel taxes would induce firms to adopt fuel-saving technologies on their most efficient cars, thereby polarizing the fuel efficiency distribution of the new vehicle fleet.
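The reported estimate can be read as an elasticity of fuel efficiency with respect to market size of about 0.03. A back-of-envelope sketch of how that estimate scales, assuming the elasticity is constant over the relevant range:

```python
# Back-of-envelope use of the estimate reported above: with an elasticity
# of fuel efficiency with respect to market size of 0.03, a 10% increase
# in sales implies roughly a 0.3% efficiency gain. The constant-elasticity
# extrapolation is an assumption for illustration.
def efficiency_gain_pct(market_growth_pct, elasticity=0.03):
    """Approximate % change in fuel efficiency for a % change in market size."""
    return elasticity * market_growth_pct

gain = efficiency_gain_pct(10)   # 10% larger market
```

Against the 1.4 percent mean annual improvement, such a gain is modest per model but meaningful when market-size shifts affect many vehicles at once.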