942 results for Logging
Abstract:
The background of this work is the need to find ways to make use of the branches and treetops that are currently left behind after logging in northern Sweden because their value is too low to justify transporting them. One solution is to place small chemical factories in the sparsely populated inland areas of Norrland that can take the forest residues, break them down into valuable chemicals directly in the forest, and then transport the products to a market. The aim of this work was to find out whether it is a good idea to invest in such small chemical factories in northern Sweden. The study was carried out through a literature review and interviews with key people; the largest part of the results comes from the interviews. The results show that the small chemical factory is a good idea. Forest residues contain many valuable substances that should be used to a greater extent than they are today. The results section of the report describes the factors that are crucial for the small chemical factory: the products that can be produced, the technology that is suitable, whether there is a market, who should operate the factory, and how the sustainability of the inland region would be affected. The conclusions that can be drawn from the study are that the small chemical factory should produce high-grade chemicals aimed at the chemical market. It may also be noted that existing technology can be used in the factories: what has been done in laboratories today can be implemented in the factory. The market will obviously depend on which products are produced, but finding a suitable market should not be impossible. The sustainability of the inland region would be positively affected; among other things, social sustainability is enhanced when these small chemical factories create job opportunities in the inland, which can help reduce emigration.
Abstract:
This is the first time a multidisciplinary team has employed an iterative co-design method to determine the ergonomic layout of an emergency ambulance treatment space. This process allowed the research team to understand how treatment protocols were performed and to develop analytical tools for reaching an optimum configuration towards ambulance design standardisation. Fusari conducted participatory observations during 12-hour shifts with front-line ambulance clinicians, hospital staff and patients to understand the details of their working environments while responding to urgent and emergency calls. A simple yet accurate 1:1 mock-up of the existing ambulance was built for detailed analysis of these procedures through simulations. Paramedics were invited to take part in interviews and role-playing inside the model to recreate tasks, how they are performed and the equipment used, and to understand the limitations of the current ambulance. Link Analysis distilled five modes of use. In parallel, an exhaustive audit of all equipment and consumables used in ambulances was performed (logging and photography) to define space use. This work produced 12 layout options, which were refined, modelled in CAD and presented back to paramedics. The preferred options and features were then developed into a full-size test rig and appearance model. Two key studies informed the process. The 2005 National Patient Safety Agency funded study “Future Ambulances” outlined nine design challenges for future standardisation of emergency vehicles and equipment. Secondly, the 2007 EPSRC funded “Smart Pods” project investigated a new system of mobile urgent and emergency medicine to treat patients in the community. A full-size mobile demonstrator unit featuring the evidence-based ergonomic layout was built for clinical tests through simulated emergency scenarios. Results from clinical trials clearly show that the new layout improves infection control, speeds up treatment, and makes it easier for ambulance crews to follow correct clinical protocols.
Abstract:
To remain competitive, forest companies seek to control their procurement costs. Harvester-processors are equipped with on-board computers that allow certain functions to be controlled and automated. However, these technologies are not commonly used and are, at best, under-utilised. While the industry shows growing interest in the use of these on-board computers, little research has examined the gains in productivity and in compliance with bucking specifications that result from using these systems. The objective of the study was to measure the impact of three degrees of automation (manual, semi-automatic and automatic) on productivity (m³ per productive machine hour) and on the compliance rate of log lengths and topping diameters (%). Data collection took place in the harvesting areas of Produits forestiers Résolu north of Lac St-Jean between January and August 2015. A complete block design was set up for each of the five operators who took part in the study. A 5% significance threshold was used for the analysis of variance, after contrasts had been computed. Only one case showed a significant difference in productivity attributable to the change in the degree of automation used, while no significant difference was detected for topping diameter compliance; trends were nevertheless observed. The length compliance rates obtained by two operators showed significant differences. Since these two operators worked on different machines, this also suggests the influence that the operator can have on the length compliance rate.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Current Ambient Intelligence and Intelligent Environment research focuses on interpreting a subject's behaviour at the activity level by logging Activities of Daily Living (ADLs) such as eating and cooking. In general, the sensors employed (e.g. PIR sensors, contact sensors) provide low-resolution information. Meanwhile, the expansion of ubiquitous computing allows researchers to gather additional information from different types of sensor, which makes it possible to improve activity analysis. Building on previous research on sitting posture detection, this research attempts to analyse human sitting activity further. The aim of this research is to use a non-intrusive, low-cost chair system with embedded pressure sensors to recognise a subject's activity from their detected postures. There are three steps in this research: the first is to find a hardware solution for low-cost sitting posture detection, the second is to find a suitable strategy for sitting posture detection, and the last is to correlate time-ordered sitting posture sequences with sitting activity. The author built a prototype sensing system called IntelliChair for sitting posture detection. Two experiments were conducted to determine the hardware architecture of the IntelliChair system. The prototype work examined sensor selection and the integration of various sensors, and identified the best choice for a low-cost, non-intrusive system. Subsequently, this research applied signal processing theory to explore the frequency characteristics of sitting posture, in order to determine a suitable sampling rate for the IntelliChair system. For the second and third steps, ten subjects were recruited for sitting posture and sitting activity data collection. The former dataset was collected by asking subjects to perform certain pre-defined sitting postures on IntelliChair and was used for the posture recognition experiment. The latter dataset was collected by asking the subjects to follow their normal sitting activity routine on IntelliChair for four hours, and was used for the activity modelling and recognition experiment. For the posture recognition experiment, two Support Vector Machine (SVM) based classifiers were trained (one for spine postures and the other for leg postures) and their performance evaluated. A Hidden Markov Model was used for sitting activity modelling and recognition, in order to identify the selected sitting activities from sitting posture sequences. After experimenting with possible sensors, the Force Sensing Resistor (FSR) was selected as the pressure sensing unit for IntelliChair. Eight FSRs were mounted on the seat and back of a chair to gather haptic (i.e. touch-based) posture information. Furthermore, the research explored the possibility of using an alternative non-intrusive sensing technology (the vision-based Kinect sensor from Microsoft) and found that the Kinect sensor is not reliable for sitting posture detection because of the joint drifting problem. A suitable sampling rate for IntelliChair, determined experimentally, is 6 Hz. The posture classification results show that the SVM based classifier is robust to “familiar” subject data (accuracy is 99.8% for spine postures and 99.9% for leg postures). When dealing with “unfamiliar” subject data, the accuracy is 80.7% for spine posture classification and 42.3% for leg posture classification. Activity recognition achieves 41.27% accuracy across four selected activities (relaxing, playing a game, working with a PC and watching video). The results of this thesis show that differences in individual body characteristics and sitting habits influence both sitting posture and sitting activity recognition. This suggests that IntelliChair is suitable for individual use but requires a training stage.
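As an illustration of the kind of classifier described in this abstract, the following is a minimal sketch (not the thesis implementation) of training an SVM on 8-channel FSR pressure readings with scikit-learn; the data shapes, class labels and placeholder readings are assumptions made for demonstration only.

```python
# Minimal sketch (not the thesis implementation): training an SVM posture
# classifier on 8-channel FSR pressure readings like those described above.
# Data shapes, class labels and the placeholder readings are illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical dataset: each row is one reading of the 8 FSRs mounted on the
# seat and backrest, sampled at roughly 6 Hz; labels are spine-posture classes.
rng = np.random.default_rng(0)
X = rng.random((1000, 8))           # placeholder pressure readings
y = rng.integers(0, 4, size=1000)   # e.g. 4 spine-posture classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# The abstract describes one SVM for spine postures and one for leg postures;
# a single RBF-kernel classifier is shown here as an example.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))
```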
Abstract:
Americans are accustomed to a wide range of data collection in their lives: census, polls, surveys, user registrations, and disclosure forms. When logging onto the Internet, users' actions are tracked everywhere: clicking, typing, tapping, swiping, searching, and placing orders. All of this data is stored to create data-driven profiles of each user. Social network sites, furthermore, set the voluntary sharing of personal data as the default mode of engagement. But the time and energy people devote to creating this massive amount of data, on paper and online, are taken for granted. Few people would consider the time and energy they spend on data production to be labor. Even those who do acknowledge their labor for data tend to see it as accessory to the activities at hand. In the face of pervasive data collection and the rising time spent on screens, why do people keep ignoring their labor for data? How has labor for data become invisible, something that is disregarded by many users? What does invisible labor for data imply for everyday cultural practices in the United States? Invisible Labor for Data addresses these questions. I argue that three intertwined forces contribute to framing data production as void of labor: data production institutions throughout history, the Internet's technological infrastructure (especially the implementation of algorithms), and the multiplication of virtual spaces. There is a common tendency in frameworks of human-computer interaction to deprive data and bodies of their materiality. My Introduction and Chapter 1 offer theoretical interventions by reinstating embodied materiality and redefining labor for data as an ongoing process. The middle chapters present case studies explaining how labor for data is pushed to the margins of narratives about data production. I focus on a nationwide debate in the 1960s over whether the U.S. should build a national databank, contemporary Big Data practices in the data broker and Internet industries, and the people hired to produce data for other people's avatars in virtual games. I conclude with a discussion of how new developments in crowdsourcing projects may usher in a new chapter in exploiting invisible and discounted labor for data.
Abstract:
A number of fish species once native only to Lakes Victoria and Kyoga have declined considerably over the years, and in some cases disappeared, due to overexploitation, the introduction of exotic species (especially the Nile perch), and environmental degradation resulting from human activities. Some of the species have been observed to survive in satellite lakes in the Victoria and Kyoga lake basins. The Nabugabo satellite lakes contain the endemic cichlid fish species Oreochromis esculentus and two haplochromine species previously found only in Lake Nabugabo. There is therefore a need to conserve these species by ensuring sustainable use and management of the resources. The study revealed that the Nabugabo lakes provide a range of socio-economic benefits accruing from fishing, farming, logging, resort beach development and the watering of animals. Although these activities affect the lakes' ecosystems, the participation of resource users in management is limited because of the weak local management institutions operating on the lakes, hence the need to strengthen them through capacity building. It is recommended that Government work jointly with the beach committees and the fishing community in a participatory way to eliminate the use of destructive fishing practices and to control the other activities that degrade the environment.
Abstract:
Australian forest industries have a long history of export trade in a wide range of products, from woodchips (for paper manufacturing) and sandalwood (essential oils, carving and incense) to high-value musical instruments, flooring and outdoor furniture. For the high-value group, fluctuating environmental conditions brought on by changes in temperature and relative humidity can lead to performance problems due to consequential swelling, shrinkage and/or distortion of the wood elements. A survey determined the types of value-added products exported, including species, dimensions, packaging used and export markets. Data loggers were installed with shipments to monitor temperature and relative humidity conditions. These data were converted to timber equilibrium moisture content values to provide an indication of the environment to which the wood elements would be acclimatising. The results of the initial survey indicated that the primary high-value wood export products included guitars, flooring, decking and outdoor furniture. The destination markets were mainly located in the northern hemisphere, particularly the United States of America, China, Hong Kong, Europe (including the United Kingdom), Japan, Korea and the Middle East. Other regions importing Australian-made wooden articles were south-east Asia, New Zealand and South Africa. Different timber species have differing rates of swelling and shrinkage, so the types of timber were also recorded during the survey. This work determined that the major species were ash-type eucalypts from south-eastern Australia (commonly referred to in the market as Tasmanian oak), jarrah from Western Australia, and spotted gum, hoop pine, white cypress, blackbutt, brush box and Sydney blue gum from Queensland and New South Wales. The environmental conditions data indicated that microclimates in shipping containers can fluctuate extensively during shipping. Conditions at the time of manufacturing were usually between 10 and 12% equilibrium moisture content; however, conditions during shipping could range from 5% (very dry) to 20% (very humid). The packaging systems used were reported to be efficient at protecting the wooden articles from damage during transit. The research highlighted the potential risk of wood components ‘moving’ in response to periods of drier or more humid conditions than those at the time of manufacturing, and the importance of engineering a packaging system that accounts for the environmental conditions experienced in shipping containers. Examples of potential dimensional changes in wooden components were calculated from published unit shrinkage data for key species and the climatic data returned by the logging equipment. This information highlighted the importance of good design to account for possible timber movement during shipping. A timber movement calculator was developed to allow designers to input component species, dimensions, site of manufacture and destination, in order to validate their product design.
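The dimensional-change estimate described above can be illustrated with a short sketch. This is not the project's actual calculator; it simply applies the standard relationship of dimensional change ≈ dimension × unit shrinkage × change in equilibrium moisture content, and the unit-shrinkage figure used in the example is a placeholder rather than a published value for any particular species.

```python
# Minimal sketch of the kind of estimate a timber movement calculator might
# perform: dimensional change from a unit shrinkage value and the equilibrium
# moisture content (EMC) swing logged in transit. The unit shrinkage figure
# in the example is a placeholder, not a published value.

def dimensional_change(width_mm: float, unit_shrinkage_pct: float,
                       emc_at_manufacture: float, emc_in_transit: float) -> float:
    """Estimated change in width (mm) for a given change in EMC (%).

    unit_shrinkage_pct is the movement per 1% change in moisture content,
    expressed as a percentage of the original dimension.
    """
    delta_emc = emc_in_transit - emc_at_manufacture
    return width_mm * (unit_shrinkage_pct / 100.0) * delta_emc

# Example: a 90 mm wide board manufactured at 11% EMC exposed to 20% EMC
# inside a shipping container, assuming 0.3% movement per 1% MC change.
print(f"estimated change: {dimensional_change(90, 0.3, 11, 20):+.2f} mm")
```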
Abstract:
Tropical forests have decreased drastically, especially in the Peruvian Amazon. In Peru deforestation is caused particularly by migrant populations: the building of houses and infrastructure, the clearing of land for agriculture, and illegal logging and mining. Deforestation hinders ecosystem vitality, accelerates climate change and reduces livelihood possibilities. As a counterpoint to cutting down trees there is reforestation, which refers to the re-establishment of forest cover. Deforestation and reforestation can be analysed in the light of Forest Transition theory. According to this theory, as the economy grows, the amount of forest cover first diminishes but then starts to increase as the economy in general strengthens. The research framework is therefore set within this theory. This study focuses on analysing socioeconomically sustainable reforestation possibilities in the community of Tingana, Peru, which is situated in a municipal conservation area around which deforestation has been heavy. Land cover change in the surroundings of the study area is analysed from Landsat TM satellite images covering a 15-year period, 1995–2010. Semi-structured interviews, with a sample of 25 people, shed light on perspectives on forests, reforestation and economic activities. The synthesis of the two methods gives information about the possibilities of promoting reforestation in Tingana and the phase of forest transition in the area. The results show that forest cover has decreased in the surroundings of Tingana, leaving the conservation area isolated from larger forest areas. Given that forest cover has also decreased inside the conservation area due to agricultural expansion, it is certain that fragmentation harms biodiversity, causing changes in the local climate that can have knock-on effects for farming and local livelihoods. Reforestation is therefore welcome when it ensures both conservation and financial benefits and when it is carried out on the locals' terms. Regarding conservation and income, the best option would be to plant native timber species together with fruit-producing species to create agroforestry systems. Economically, the community should aim towards an economy that relies on ecotourism, as is already practiced in the area. Reforestation could increase ecotourism, which in turn could increase reforestation through revenues. Regarding forest transition, it is likely that forest re-establishment will occur if reforestation together with ecotourism is implemented over a long time scale.
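The land cover change analysis mentioned above can be sketched in a few lines. This is not the study's actual workflow; it assumes the 1995 and 2010 Landsat TM scenes have already been classified into land-cover codes and loaded as equally sized arrays, and the class code used for forest is an assumption for the example.

```python
# Illustrative sketch only (not the study's workflow): quantifying forest
# cover change between two classified Landsat TM scenes. It assumes the 1995
# and 2010 scenes have already been classified into land-cover codes and
# loaded as equally sized arrays; class code 1 = forest is an assumption.
import numpy as np

FOREST = 1
PIXEL_AREA_HA = 0.09  # a 30 m Landsat pixel is approximately 0.09 ha

def forest_change(classes_1995: np.ndarray, classes_2010: np.ndarray) -> dict:
    forest_95 = classes_1995 == FOREST
    forest_10 = classes_2010 == FOREST
    return {
        "forest_1995_ha": float(forest_95.sum()) * PIXEL_AREA_HA,
        "forest_2010_ha": float(forest_10.sum()) * PIXEL_AREA_HA,
        "lost_ha": float((forest_95 & ~forest_10).sum()) * PIXEL_AREA_HA,
        "gained_ha": float((~forest_95 & forest_10).sum()) * PIXEL_AREA_HA,
    }

# Tiny placeholder rasters standing in for the classified scenes.
scene_1995 = np.array([[1, 1], [1, 0]])
scene_2010 = np.array([[1, 0], [0, 1]])
print(forest_change(scene_1995, scene_2010))
```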
Abstract:
Although the value of primary forests for biodiversity conservation is well known, the potential biodiversity and conservation value of regenerating forests remains controversial. Many factors likely contribute to this, including: (1) the variable ages of the regenerating forests being studied (often dominated by relatively young regenerating forests); (2) the potential for confounding ongoing human disturbance (such as logging and hunting); (3) the relatively low number of multi-taxa studies; (4) the lack of studies that directly compare different historic disturbances within the same location; and (5) contrasting patterns from different survey methodologies and the paucity of knowledge on impacts across different vertical levels of rainforest biodiversity (often due to a lack of suitable methodologies to assess them). We also know relatively little about how biodiversity is affected by major current impacts, such as unmarked rainforest roads, which contribute to habitat degradation and fragmentation. This thesis explores the potential biodiversity value of regenerating rainforests under the best of scenarios and seeks to understand more about the impact of current human disturbance on biodiversity; the data come from case studies in the Manu and Sumaco Biosphere Reserves in the Western Amazon. Specifically, I compare the overall biodiversity and conservation value of a best-case regenerating rainforest site with a selection of well-studied primary forest sites and with predicted species lists for the region, including a focus on species of key conservation concern. I then investigate the biodiversity of the same study site in relation to different types of historic anthropogenic disturbance. Following this, I investigate the impacts on biodiversity of an unmarked rainforest road. In order to understand more about the differential effects of habitat disturbance on arboreal diversity, I directly assess how patterns of butterfly biodiversity vary between three vertical strata. Although assessments within the canopy have been made for birds, invertebrates and bats, very few studies have successfully targeted arboreal mammals; I therefore investigate the potential of camera traps for inventorying arboreal mammal species in comparison with traditional methodologies. Finally, to investigate the possibility that different survey methodologies might identify different biodiversity patterns in habitat disturbance assessments, I examine whether two commonly used survey methodologies for amphibians indicate the same or different responses of amphibian biodiversity to historic habitat change by people. The regenerating rainforest study site contained high levels of species richness, both relative to the alpha diversity found in nearby primary forest areas (87% ±3.5) and relative to the predicted primary forest diversity for the region (83% ±6.7). This included 89% (39 out of 44) of the species of high conservation concern predicted for the Manu region. Faunal species richness in once completely cleared regenerating forest was on average 13% (±9.8) lower than in historically selectively logged forest. The presence of the small unmarked road significantly altered levels of faunal biodiversity for three taxa, up to and potentially beyond 350 m into the forest interior; most notably, the impact on biodiversity extended to at least 32% of the whole reserve area. The assessment of butterflies across strata showed that different vertical zones within the same rainforest responded differently in areas with different historic human disturbance. A comparison between forest regenerating after selective logging and forest regenerating after complete clearance showed a 17% greater reduction in canopy species richness in the historically cleared forest than in the terrestrial community. Comparing arboreal camera traps with traditional ground-based techniques suggests that camera traps are an effective tool for inventorying secretive arboreal rainforest mammal communities and detect a higher number of cryptic species. Finally, the two survey methodologies used to assess amphibian communities identified contrasting biodiversity patterns in a human-modified rainforest: one indicated biodiversity differences between forests with different human disturbance histories, whereas the other suggested no differences between forest disturbance types. Overall, in this thesis I find that regenerating and human-disturbed tropical forest can potentially contribute to rainforest biodiversity conservation, particularly in the best of circumstances. I also highlight the importance of using appropriate methodologies to investigate these three-dimensional habitats, and contribute to the development of such methodologies. However, care should be taken when using different survey methodologies, which can provide contrasting biodiversity patterns in response to human disturbance.
Abstract:
Metadata are key to the categorisation of information in digital services. In essence this is the cataloguing and classification of information, and its use is one of the best practices in information management; in the same way that catalogues and OPACs do, it leads to better services for users, whether of virtual libraries, e-government, e-learning or e-health. Metadata are also the basis for future developments such as the Semantic Web. The topic is of particular interest to librarians because, as organisers of knowledge, they know the classification schemes, data-recording rules such as AACR2, and specialised vocabularies. This document reviews some basic concepts on the subject and comments on the steps Latin America is taking in this global area.
Abstract:
The Dehram Group includes the Faraghan, Dalan and Kangan formations. The Kangan Formation is Lower Triassic in age and is one of the important reservoir rocks of southern Iran and the Persian Gulf. In this research the Kangan Formation is studied in two wells, A and B. Based on the study of 75 thin sections, four carbonate lithofacies associations, A, B, C and D, comprising 12 subfacies, are identified. Lithofacies association A includes four subfacies: A1, A2, A3 and A4. Lithofacies association B consists of three subfacies: B1, B2 and B3. Lithofacies association C consists of three subfacies: C1, C2 and C3, and lithofacies association D includes two subfacies: D1 and D2. On the basis of these studies, the lithofacies associations of the Kangan Formation formed in three environments, tidal flat, lagoon and barrier/shore complex, on a carbonate platform of ramp type. Diagenetic processes have affected this formation; the most important are cementation, anhydritization, micritization, neomorphism, bioturbation, dissolution, compaction, dolomitization and porosity development. Sequence stratigraphy studies were performed based on the vertical and horizontal relationships of the lithofacies associations and on gamma ray and sonic well logs, leading to the identification of two sedimentary sequences. The first sedimentary sequence includes a Transgressive Systems Tract (TST) and a Highstand Systems Tract (HST). Its lower boundary is a Sequence Boundary 1 (SB1), which marks the unconformity between the Dalan and Kangan formations, i.e. the Permian–Triassic unconformity. The upper boundary is of Sequence Boundary 2 (SB2) type and is identified by carbonate facies associated with nodular anhydrite. The second sedimentary sequence also includes a TST and an HST; its lower and upper boundaries are both of SB2 type and consist of carbonate facies with nodular anhydrite.
Abstract:
A smart solar photovoltaic grid system is the product of bringing information and communications technology (ICT) together with power systems control engineering via the internet [1]. This thesis designs and demonstrates a smart solar photovoltaic grid system that is self-healing and environmentally and consumer friendly, and that can also accommodate other renewable sources of energy generation seamlessly, creating a healthy, competitive energy industry and optimising the efficiency of energy assets. The thesis also presents the modelling of an efficient, dynamic smart solar photovoltaic power grid system, exploring maximum power point tracking efficiency and optimising the smart solar photovoltaic array through modelling and simulation to improve the quality of design of the solar photovoltaic module. Although quite promising results have been published in the literature over the past decade, most of them have not addressed the research questions posed in this thesis. The Levenberg-Marquardt and sparse-based algorithms have proven to be very effective tools in helping to improve the quality of design of solar photovoltaic modules, minimising the possible relative errors in this thesis. Guided by theoretical and analytical reviews of the literature, this research chose the MATLAB/Simulink software toolbox for the modelling and simulation experiments performed on the static smart solar grid system. The auto-correlation coefficient results obtained from the modelling experiments give an accuracy of 99% with negligible mean square error (MSE), root mean square error (RMSE) and standard deviation. The thesis further explores the design and implementation of a robust real-time online solar photovoltaic monitoring system, establishing a comparative study of two solar photovoltaic tracking systems that provide remote access to the harvested energy data. This research designed and implemented a unique approach to online, remote-access solar photovoltaic monitoring, providing updated information on the energy produced by the solar photovoltaic module at the site location. To address the challenge of online solar photovoltaic monitoring, a Darfon online data logger was systematically integrated into the design for a comparative study of the two solar photovoltaic tracking systems examined in this thesis. The site for the comparative study of the solar photovoltaic tracking systems is the National Kaohsiung University of Applied Sciences, Taiwan, R.O.C. The overall comparative energy output efficiency of the azimuthal-altitude dual-axis system over the 45° stationary solar photovoltaic monitoring system, as observed at the research site, is about 72%, based on the total energy produced, the estimated money saved and the amount of CO2 reduction achieved. Similarly, in comparing the total amount of energy produced by the two solar photovoltaic tracking systems, the overall daily generated energy for the month of July shows the effectiveness of the azimuthal-altitude tracking system over the 45° stationary solar photovoltaic system: the azimuthal-altitude dual-axis tracking system was found to be about 68.43% efficient compared with the 45° stationary system. Lastly, the overall comparative hourly energy efficiency of the azimuthal-altitude dual-axis system over the 45° stationary solar photovoltaic energy system was found to be 74.2%.
Results from this research are promising and significant in satisfying the research objectives and questions posed in the thesis. The new algorithms introduced in this research, and the statistical measures applied to the modelling and simulation of smart static solar photovoltaic grid system performance, outperform previous work in the reviewed literature. Based on this new design of the online data logging system for solar photovoltaic monitoring, it is possible for the first time to have online, on-site information on the energy produced available remotely, with fault identification and rectification, maintenance and recovery deployed as quickly as possible. The results presented in this research, as Internet of Things (IoT) on smart solar grid systems, are likely to offer real-life insights both to the existing body of knowledge and to the future solar photovoltaic energy industry, irrespective of the study site chosen for the comparative solar photovoltaic tracking systems. While the thesis has contributed to the smart solar photovoltaic grid system, it has also highlighted areas of further research and the need to investigate further improvements in the choice and quality of design of solar photovoltaic modules. Finally, it makes recommendations for further research on minimising the absolute or relative errors in the quality and design of the smart static solar photovoltaic module.
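As a simple illustration of the kind of comparison reported above, the following sketch computes the relative gain of a dual-axis tracking array over a fixed-tilt array from daily energy totals such as those returned by an online data logger. It is not the thesis's analysis code, and the daily figures are placeholders rather than measurements from the study.

```python
# Minimal sketch of the comparison reported above: the relative energy gain
# of an azimuthal-altitude dual-axis tracker over a 45-degree stationary
# array, computed from daily energy totals such as those returned by an
# online data logger. The figures below are placeholders, not study data.

def relative_gain(tracking_kwh: list[float], stationary_kwh: list[float]) -> float:
    """Percentage by which the tracking system out-produces the fixed array."""
    total_tracking = sum(tracking_kwh)
    total_stationary = sum(stationary_kwh)
    return (total_tracking - total_stationary) / total_stationary * 100.0

# Hypothetical one-week log (kWh per day) for each system.
tracking_daily = [5.9, 6.1, 4.8, 6.3, 5.7, 6.0, 5.5]
stationary_daily = [3.6, 3.7, 3.1, 3.8, 3.4, 3.6, 3.3]
print(f"tracking gain over stationary: "
      f"{relative_gain(tracking_daily, stationary_daily):.1f}%")
```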
Abstract:
Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to the electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways in which single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes of the events and records (e.g., the gender of a patient), or metrics about the timestamps themselves (e.g., the duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome. Current visual analytics tools for comparing groups of event sequences emphasize either a purely statistical or a purely visual approach to comparison. Visual analytics tools leverage humans' ability to easily spot patterns and anomalies they were not expecting, but are limited by uncertainty in the findings. Statistical tools emphasize finding significant differences in the data, but often require researchers to have a concrete question and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery. Integrating statistics into a visualization tool presents many challenges on the front end (e.g., displaying the results of many different metrics concisely) and in the back end (e.g., the scalability challenges of running various metrics on multi-dimensional data at once). I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies which demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and of the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts.
This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
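To illustrate the kind of per-metric testing such a framework automates, here is a small sketch (not the dissertation's HVHT implementation) that compares the per-record frequency of each event type between two cohorts using a Mann-Whitney U test with a Bonferroni correction; the cohort data and event names are made up for the example.

```python
# Small sketch of the general idea (not the dissertation's HVHT framework):
# for each event type, test whether its per-record frequency differs between
# two cohorts, then apply a Bonferroni correction across all tests.
# The cohorts and event names below are made up for the example.
from collections import Counter
from scipy.stats import mannwhitneyu

def compare_event_frequencies(cohort_a, cohort_b, alpha=0.05):
    """cohort_a, cohort_b: lists of event sequences (lists of event names)."""
    event_types = {e for seq in cohort_a + cohort_b for e in seq}
    p_values = {}
    for event in event_types:
        freq_a = [Counter(seq)[event] for seq in cohort_a]
        freq_b = [Counter(seq)[event] for seq in cohort_b]
        _, p = mannwhitneyu(freq_a, freq_b, alternative="two-sided")
        p_values[event] = p
    threshold = alpha / max(len(p_values), 1)  # Bonferroni correction
    return {event: (p, p < threshold) for event, p in p_values.items()}

cohort_a = [["search", "search", "buy"], ["search", "view"]]
cohort_b = [["buy", "buy"], ["view", "buy"]]
print(compare_event_frequencies(cohort_a, cohort_b))
```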