924 results for catalog


Relevance: 10.00%

Abstract:

The aim of this book is to combine general theory with practical engineering experience, giving a comprehensive treatment of the theory of automatic control and automatic regulation across the various systems of engineering. It both lays the theoretical foundation of engineering cybernetics as a technical science and points out several directions for future research in this new discipline.
The book was originally written in English. The present Chinese edition was prepared under the guidance of Qian Xuesen (H. S. Tsien) by translating the English edition, slightly revised and supplemented with reference to the Russian translation.
The book won a first-class science prize of the Chinese Academy of Sciences for 1956.

Contents

Preface to the Chinese Edition
Original Preface
Chapter 1: Introduction
 1.1 Linear systems with constant coefficients
 1.2 Linear systems with variable coefficients
 1.3 Nonlinear systems
 1.4 The problem of engineering approximation
Chapter 2: The Laplace Transform Method
 2.1 The Laplace transform and the inversion formula
 2.2 Solving linear differential equations with constant coefficients by the Laplace transform
 2.3 A "dictionary" of Laplace transforms (table of Laplace transforms)
 2.4 Discussion of sinusoidal forcing functions
 2.5 Discussion of the unit-impulse forcing function
Chapter 3: Input, Output, and Transfer Functions
 3.1 First-order systems
 3.2 Representations of the transfer function
 3.3 Some examples of first-order systems
 3.4 Second-order systems
 3.5 Methods for determining the frequency response
 3.6 Systems composed of several components
 3.7 Transcendental transfer functions
Chapter 4: Feedback Servomechanisms
 4.1 The concept of feedback
 4.2 Design criteria for feedback servomechanisms
 4.3 The Nyquist method
 4.4 The Evans method
 4.5 A hydrodynamic analogy for the root locus
 4.6 The Bode method
 4.7 Design of the transfer function
 4.8 Multiple-loop servomechanisms
Chapter 5: Noninteracting Control
 5.1 Control of a single-variable system
 5.2 Control of multivariable systems
 5.3 Conditions for noninteraction
 5.4 The response equations
 5.5 Control of turboprop engines
 5.6 Control of turbojet engines with afterburning
Chapter 6: AC Servomechanisms and Oscillating Control Servomechanisms
 6.1 AC systems
 6.2 How the transfer function changes when a DC system is converted to an AC system
 6.3 Oscillating control servomechanisms
 6.4 Frequency response of relays
 6.5 Oscillating control servomechanisms using the natural oscillation
 6.6 General oscillating control servomechanisms
Chapter 7: Sampled-Data Servomechanisms
 7.1 The output of a sampling circuit
 7.2 The Stibitz–Shannon theory
 7.3 The Nyquist criterion for sampled-data servomechanisms
 7.4 Steady-state error
 7.5 Calculation of F2*(s)
 7.6 Comparison of continuously operating and sampled-data servomechanisms
 7.7 The case in which F2(s) has a pole at the origin
Chapter 8: Linear Systems with Time Lag
Chapter 9: Linear Systems under Stationary Random Inputs
Chapter 10: Relay Servomechanisms
Chapter 11: Nonlinear Systems
Chapter 12: Linear Systems with Variable Coefficients
Chapter 13: Control Design by Perturbation Theory
Chapter 14: Control Design to Satisfy Specified Integral Conditions
Chapter 15: Control Systems That Automatically Seek the Optimal Operating Point
Chapter 16: Design Principles for Noise Filtering
Chapter 17: Self-Stabilizing and Environment-Adapting Systems
Chapter 18: Control of Error
Russian-Language References
Index
Appendix
A Brief Introduction to Engineering Cybernetics
Modernization, the Technological Revolution, and Cybernetics
Editor's Postscript
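As a small illustration of the first-order transfer functions of Chapter 3 and the frequency-response methods of Section 3.5, the sketch below evaluates G(s) = 1/(τs + 1) at s = jω. The time constant and test frequency are hypothetical values chosen for the example, not taken from the book.

```python
import math

def first_order_response(tau, omega):
    """Frequency response of G(s) = 1/(tau*s + 1) at s = j*omega.

    Returns (magnitude, phase in radians)."""
    mag = 1.0 / math.sqrt(1.0 + (omega * tau) ** 2)
    phase = -math.atan(omega * tau)
    return mag, phase

# Hypothetical values: tau = 0.5 s, omega = 2 rad/s, i.e. exactly the
# corner frequency omega = 1/tau, where the magnitude is 1/sqrt(2)
# (-3 dB) and the phase lag is 45 degrees.
mag, phase = first_order_response(tau=0.5, omega=2.0)
```

Sweeping `omega` over a logarithmic grid and plotting `mag` and `phase` reproduces the familiar Bode diagram of Section 4.6.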

Relevance: 10.00%

Abstract:

Dimensional analysis is a subject well worth studying: it is an effective tool for exploring scientific laws and for solving problems in science and engineering. A good command of dimensional analysis should be part of the basic training of every scientific and technical worker.
The book covers the basic concepts of dimensional analysis; its application to familiar phenomena in mechanics; its application to certain classical problems in mechanics; and many application examples in explosion mechanics from the research group of Zheng Zhemin over the past thirty to forty years.

Contents

Preface
Chapter 1: Introduction
1.1 Dimensional analysis as a powerful means and method for analyzing and studying problems
1.2 The measurement of physical quantities
1.3 Dimensions: dimensional and dimensionless quantities
1.4 Fundamental and derived quantities
1.5 The simple pendulum
1.6 The essence of dimensional analysis
1.7 A brief history of dimensional analysis
Chapter 2: Basic Principles
2.1 The power-law representation of dimensions
2.2 The Π theorem
2.3 Choosing the independent variables and the fundamental quantities
2.4 Similarity laws
2.5 Points to note in applying the Π theorem
Chapter 3: Fluid Mechanics Problems
3.1 Typical flows
3.2 Similarity parameters in fluid mechanics problems [13]
3.3 Other similarity parameters
3.4 Classification of fluid motions
Chapter 4: Solid Mechanics Problems
4.1 Stress analysis of elastic bodies and stability analysis of simple structures
4.2 Vibrations and waves in elastic bodies
4.3 Stress analysis of elastic-plastic bodies
4.4 Tensile fracture of solids
Chapter 5: Heat Conduction and Thermal Stress in Solids
5.1 Heat conduction in solids
5.2 Thermal stresses in elastic bodies
Chapter 6: Fluid-Structure Interaction Problems
6.1 Water hammer
6.2 Elasticity and bearings
6.3 Wing flutter
6.4 Flow-induced vibration in heat exchangers
Chapter 7: The Fluid-Elastic-Plastic Model
7.1 The fluid-elastic-plastic body model
7.2 Similarity parameters in the explosive effects of chemical explosives
7.3 Similarity parameters in high-velocity impact problems
Chapter 8: Explosion Similarity Laws
8.1 Blast waves in air and in water
8.2 Explosive working
8.3 Blasting
Chapter 9: Impact Similarity Laws
9.1 Long-rod armor-piercing penetrators
9.2 Armor penetration: the formation of shaped-charge jets and their penetration of armor
9.3 Spallation in armor attack
9.4 Hypervelocity impact
9.5 Metal jets and the high-speed expansion fracture of thin plates
9.6 Coal and gas outbursts: a dynamic phenomenon in two-phase coupled media
Chapter 10: Regularization of Mathematical Modeling
References
Subject Index
Index of Foreign Names
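The simple pendulum of Section 1.5 is the classic illustration of the Π theorem of Section 2.2: from the period T, length L, and gravitational acceleration g, only one dimensionless group Π = T·sqrt(g/L) can be formed, so it must be a constant (about 2π for small swings). The sketch below checks this numerically; the integration scheme and the parameter values are our own illustrative choices, not taken from the book.

```python
import math

def pendulum_period(L, g, theta0=0.05, dt=1e-4):
    """Small-amplitude period of a simple pendulum, found by integrating
    theta'' = -(g/L)*sin(theta) with RK4 and timing two zero crossings."""
    def deriv(th, om):
        return om, -(g / L) * math.sin(th)
    th, om, t = theta0, 0.0, 0.0
    crossings = []
    while len(crossings) < 2:
        k1 = deriv(th, om)
        k2 = deriv(th + 0.5 * dt * k1[0], om + 0.5 * dt * k1[1])
        k3 = deriv(th + 0.5 * dt * k2[0], om + 0.5 * dt * k2[1])
        k4 = deriv(th + dt * k3[0], om + dt * k3[1])
        new_th = th + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        om += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        t += dt
        if th > 0 >= new_th or th < 0 <= new_th:
            crossings.append(t)
        th = new_th
    # successive zero crossings are half a period apart
    return 2.0 * (crossings[1] - crossings[0])

# The dimensionless group T*sqrt(g/L) comes out the same for very
# different (L, g), as the Pi theorem requires.
pi_1 = pendulum_period(1.0, 9.8) * math.sqrt(9.8 / 1.0)
pi_2 = pendulum_period(2.5, 3.7) * math.sqrt(3.7 / 2.5)
```

Dimensional analysis alone cannot give the value of the constant (here ≈ 2π); it only guarantees that the group is invariant, which is exactly what the two computed values confirm.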

Relevance: 10.00%

Abstract:

Assateague Island is an offshore bar lying along the southeastern coast of Maryland and the northeastern coast of Virginia. It is part of the system of discontinuous barrier reefs, or bars, which occupy most of the Atlantic shoreline from Florida to Massachusetts. These are unstable bars, continuously influenced by storm winds and tides, which provide a distinct and rigorous habitat for the vegetation there. General floras of the Delmarva Peninsula do not mention Assateague Island specifically. The objective is to prepare a catalog of the vascular plants of Assateague Island and to describe the communities in which they are found, in the hope that it will add to the knowledge of barrier reef vegetation.

Relevance: 10.00%

Abstract:

[EN] This article investigates the question of the licensing of null arguments in the so-called pro-drop languages. By focusing on the licensing of null subjects in the different types of -T(Z)E nominalizations in Basque, it aims at defining in a precise way the crucial feature that makes pro-drop possible in a clause. The central claim is that what licenses subject-drop is the assignment of structural Case. That is, it is argued that a subject can be null if and only if it is assigned structural Case. Different aspects of T(Z)E nominalizations are also explored, which show that even if these clauses are similar in the surface, they can be syntactically very different and furthermore, that infinitive clauses marked with the same nominalizing morpheme can also have diverging structures.

Relevance: 10.00%

Abstract:

Translated into the vernacular and offered to the General, Constituent, and Legislative Assembly of the Empire of Brazil, by R.P.B.

Relevance: 10.00%

Abstract:

We present measurements of the spatial distribution, kinematics, and physical properties of gas in the circumgalactic medium (CGM) of 2.0<z<2.8 UV color-selected galaxies as well as within the 2<z<3 intergalactic medium (IGM). These measurements are derived from Voigt profile decomposition of the full Lyα and Lyβ forest in 15 high-resolution, high signal-to-noise ratio QSO spectra resulting in a catalog of ∼6000 HI absorbers.

Chapter 2 of this thesis focuses on HI surrounding high-z star-forming galaxies drawn from the Keck Baryonic Structure Survey (KBSS). The KBSS is a unique spectroscopic survey of the distant universe designed to explore the details of the connection between galaxies and intergalactic baryons within the same survey volumes. The KBSS combines high-quality background QSO spectroscopy with large densely-sampled galaxy redshift surveys to probe the CGM at scales of ∼50 kpc to a few Mpc. Based on these data, Chapter 2 presents the first quantitative measurements of the distribution, column density, kinematics, and absorber line widths of neutral hydrogen surrounding high-z star-forming galaxies.

Chapter 3 focuses on the thermal properties of the diffuse IGM. This analysis relies on measurements of the ∼6000 absorber line widths to constrain the thermal and turbulent velocities of absorbing "clouds." A positive correlation between the column density of HI and the minimum line width is recovered and implies a temperature-density relation within the low-density IGM for which higher-density regions are hotter, as is predicted by simple theoretical arguments.

Chapter 4 presents new measurements of the opacity of the IGM and CGM to hydrogen-ionizing photons. The chapter begins with a revised measurement of the HI column density distribution based on this new absorption line catalog that, due to the inclusion of high-order Lyman lines, provides the first statistically robust measurement of the frequency of absorbers with HI column densities 14 ≲ log(N_HI / cm^-2) ≲ 17.2. Also presented are the first measurements of the column density distribution of HI within the CGM (50 < d < 300 pkpc) of high-z galaxies. These distributions are used to calculate the total opacity of the IGM and IGM+CGM and to revise previous measurements of the mean free path of hydrogen-ionizing photons within the IGM. This chapter also considers the effect of the surrounding CGM on the transmission of ionizing photons out of the sites of active star-formation and into the IGM.
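The opacity calculation described above can be sketched in outline: the effective Lyman-limit opacity is an integral of the column-density distribution f(N) weighted by the absorption probability 1 − exp(−Nσ). The single power-law form, normalization, and slope below are toy assumptions for illustration only, not the fitted distribution from the thesis.

```python
import math

SIGMA_LL = 6.3e-18  # HI photoionization cross section at the Lyman limit, cm^2

def tau_eff(amplitude, beta, logN_min=12.0, logN_max=22.0, steps=2000):
    """Effective opacity for f(N) = amplitude * N**(-beta), a toy single
    power law (NOT the measured distribution), integrated over log N."""
    total = 0.0
    dlogN = (logN_max - logN_min) / steps
    for i in range(steps):
        logN = logN_min + (i + 0.5) * dlogN
        N = 10.0 ** logN
        f = amplitude * N ** (-beta)
        # dN = N * ln(10) * dlogN converts the log-spaced grid to dN
        total += f * (1.0 - math.exp(-N * SIGMA_LL)) * N * math.log(10) * dlogN
    return total

tau = tau_eff(amplitude=1.0e7, beta=1.65)  # both parameters hypothetical
```

Because the weighting saturates at 1 for optically thick systems, absorbers near N ≈ 1/σ ≈ 1.6e17 cm^-2 dominate the integral, which is why the high-order Lyman-line coverage of the 14–17.2 decade matters for the mean free path.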

This thesis concludes with a brief discussion of work in progress focused on understanding the distribution of metals within the CGM of KBSS galaxies. Appendix B discusses my contributions to the MOSFIRE instrumentation project.

Relevance: 10.00%

Abstract:

Cdc48/p97 is an essential, highly abundant hexameric member of the AAA (ATPase associated with various cellular activities) family. It has been linked to a variety of processes throughout the cell but it is best known for its role in the ubiquitin proteasome pathway. In this system it is believed that Cdc48 behaves as a segregase, transducing the chemical energy of ATP hydrolysis into mechanical force to separate ubiquitin-conjugated proteins from their tightly-bound partners.

Current models posit that Cdc48 is linked to its substrates through a variety of adaptor proteins, including a family of seven proteins (13 in humans) that contain a Cdc48-binding UBX domain. As such, due to the complexity of the network of adaptor proteins for which it serves as the hub, Cdc48/p97 has the potential to exert a profound influence on the ubiquitin proteasome pathway. However, the number of known substrates of Cdc48/p97 remains relatively small, and smaller still is the number of substrates that have been linked to a specific UBX domain protein. As such, the goal of this dissertation research has been to discover new substrates and better understand the functions of the Cdc48 network. With this objective in mind, we established a proteomic screen to assemble a catalog of candidate substrate/targets of the Ubx adaptor system.

Here we describe the implementation and optimization of a cutting-edge quantitative mass spectrometry method to measure relative changes in the Saccharomyces cerevisiae proteome. Utilizing this technology, and in order to better understand the breadth of function of Cdc48 and its adaptors, we then performed a global screen to identify accumulating ubiquitin conjugates in cdc48-3 and ubxΔ mutants. In this screen different ubx mutants exhibited reproducible patterns of conjugate accumulation that differed greatly from each other, pointing to various unexpected functional specializations of the individual Ubx proteins.

As validation of our mass spectrometry findings, we then examined in detail the endoplasmic-reticulum-bound transcription factor Spt23, which we identified as a putative Ubx2 substrate. In these studies ubx2Δ cells were deficient in processing of Spt23 to its active p90 form and in localizing p90 to the nucleus. Additionally, consistent with reduced processing of Spt23, ubx2Δ cells demonstrated a defect in expression of the Spt23 target gene OLE1, a fatty acid desaturase. Overall, this work demonstrates the power of proteomics as a tool to identify new targets of various pathways and reveals Ubx2 as a key regulator of lipid membrane biosynthesis.

Relevance: 10.00%

Abstract:

Galaxy clusters are the largest gravitationally bound objects in the observable universe, and they are formed from the largest perturbations of the primordial matter power spectrum. During initial cluster collapse, matter is accelerated to supersonic velocities, and the baryonic component is heated as it passes through accretion shocks. This process stabilizes when the pressure of the bound matter prevents further gravitational collapse. Galaxy clusters are useful cosmological probes, because their formation progressively freezes out at the epoch when dark energy begins to dominate the expansion and energy density of the universe. A diverse set of observables, from radio through X-ray wavelengths, are sourced from galaxy clusters, and this is useful for self-calibration. The distributions of these observables trace a cluster's dark matter halo, which represents more than 80% of the cluster's gravitational potential. One such observable is the Sunyaev-Zel'dovich effect (SZE), which results when the ionized intracluster medium blueshifts the cosmic microwave background via inverse Compton scattering. Great technical advances in the last several decades have made regular observation of the SZE possible. Resolved SZE science, such as is explored in this analysis, has benefitted from the construction of large-format camera arrays consisting of highly sensitive millimeter-wave detectors, such as Bolocam. Bolocam is a submillimeter camera, sensitive to 140 GHz and 268 GHz radiation, located at one of the best observing sites in the world: the Caltech Submillimeter Observatory on Mauna Kea in Hawaii. Bolocam fielded 144 of the original spider-web NTD bolometers used in an entire generation of ground-based, balloon-borne, and satellite-borne millimeter-wave instrumentation. Over approximately six years, our group at Caltech has developed a mature galaxy cluster observational program with Bolocam. This thesis describes the construction of the instrument's full cluster catalog: BOXSZ.
Using this catalog, I have scaled the Bolocam SZE measurements with X-ray mass approximations in an effort to characterize the SZE signal as a viable mass probe for cosmology. This work has confirmed the SZE to be a low-scatter tracer of cluster mass. The analysis has also revealed how sensitive the SZE-mass scaling is to small biases in the adopted mass approximation. Future Bolocam analysis efforts are set on resolving these discrepancies by approximating cluster mass jointly with different observational probes.

Relevance: 10.00%

Abstract:

Uncovering the demographics of extrasolar planets is crucial to understanding the processes of their formation and evolution. In this thesis, we present four studies that contribute to this end, three of which relate to NASA's Kepler mission, which has revolutionized the field of exoplanets in the last few years.

In the pre-Kepler study, we investigate a sample of exoplanet spin-orbit measurements---measurements of the inclination of a planet's orbit relative to the spin axis of its host star---to determine whether a dominant planet migration channel can be identified, and at what confidence. Applying methods of Bayesian model comparison to distinguish between the predictions of several different migration models, we find that the data strongly favor a two-mode migration scenario combining planet-planet scattering and disk migration over a single-mode Kozai migration scenario. While we test only the predictions of particular Kozai and scattering migration models in this work, these methods may be used to test the predictions of any other spin-orbit misaligning mechanism.
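The Bayesian model comparison described above can be sketched with a toy example: two hypothetical "migration models," each making a fixed prediction for the distribution of the spin-orbit misalignment angle, are compared via the Bayes factor, the ratio of the marginal likelihoods of the data. Both predictive densities and the sample angles below are invented for illustration and are not the models or data of the study.

```python
import math

def density_aligned(x):
    """Model A: angles concentrated near alignment
    (truncated exponential with scale 0.3 rad on [0, pi])."""
    norm = 1.0 - math.exp(-math.pi / 0.3)
    return math.exp(-x / 0.3) / (0.3 * norm)

def density_isotropic(x):
    """Model B: isotropic spin-orbit angles, p(x) = sin(x)/2 on [0, pi]."""
    return 0.5 * math.sin(x)

def log_bayes_factor(data):
    """log K = sum log p_A(x) - sum log p_B(x).

    With no free parameters, each model's marginal likelihood is just
    the product of its predictive density over the data."""
    return sum(math.log(density_aligned(x)) - math.log(density_isotropic(x))
               for x in data)

angles = [0.1, 0.2, 0.15, 0.4, 0.05, 0.3]  # invented, mostly well-aligned (rad)
logK = log_bayes_factor(angles)  # positive log K favors the aligned model
```

Real migration models have free parameters, so their marginal likelihoods require integrating over priors rather than a simple product, but the decision rule, comparing marginal likelihoods, is the same.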

We then present two studies addressing astrophysical false positives in Kepler data. The Kepler mission has identified thousands of transiting planet candidates, and only relatively few have yet been dynamically confirmed as bona fide planets, with only a handful more even conceivably amenable to future dynamical confirmation. As a result, the ability to draw detailed conclusions about the diversity of exoplanet systems from Kepler detections relies critically on understanding the probability that any individual candidate might be a false positive. We show that a typical a priori false positive probability for a well-vetted Kepler candidate is only about 5-10%, enabling confidence in demographic studies that treat candidates as true planets. We also present a detailed procedure that can be used to securely and efficiently validate any individual transit candidate using detailed information of the signal's shape as well as follow-up observations, if available.

Finally, we calculate an empirical, non-parametric estimate of the shape of the radius distribution of small planets with periods less than 90 days orbiting cool (less than 4000K) dwarf stars in the Kepler catalog. This effort reveals several notable features of the distribution, in particular a maximum in the radius function around 1-1.25 Earth radii and a steep drop-off in the distribution larger than 2 Earth radii. Even more importantly, the methods presented in this work can be applied to a broader subsample of Kepler targets to understand how the radius function of planets changes across different types of host stars.
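A nonparametric estimate of a radius distribution like the one described can be sketched with a weighted kernel density estimate, where each detection is weighted by the inverse of its detection completeness. The radii, weights, grid, and bandwidth below are hypothetical illustration values, not the Kepler measurements or the estimator used in the thesis.

```python
import math

def kde(values, weights, grid, bandwidth):
    """Weighted Gaussian kernel density estimate, normalized to unit area."""
    norm = sum(weights)
    out = []
    for x in grid:
        s = 0.0
        for v, w in zip(values, weights):
            u = (x - v) / bandwidth
            s += w * math.exp(-0.5 * u * u)
        out.append(s / (norm * bandwidth * math.sqrt(2.0 * math.pi)))
    return out

# Hypothetical planet radii (Earth radii) and completeness weights:
# smaller planets are harder to detect, so they get larger weights.
radii = [0.8, 1.0, 1.1, 1.2, 1.3, 1.6, 2.0, 2.4, 3.0]
weights = [2.0, 1.8, 1.6, 1.5, 1.4, 1.1, 1.0, 0.9, 0.8]
grid = [0.5 + 0.05 * i for i in range(71)]  # 0.5 to 4.0 Earth radii
density = kde(radii, weights, grid, bandwidth=0.25)
```

With these invented inputs the estimate peaks near 1 Earth radius and falls off above 2, qualitatively echoing the shape the abstract describes; the method itself is generic and transfers directly to other host-star subsamples.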

Relevance: 10.00%

Abstract:

Part 1 of this thesis is about the 24 November, 1987, Superstition Hills earthquakes. The Superstition Hills earthquakes occurred in the western Imperial Valley in southern California. The earthquakes took place on a conjugate fault system consisting of the northwest-striking right-lateral Superstition Hills fault and a previously unknown Elmore Ranch fault, a northeast-striking left-lateral structure defined by surface rupture and a lineation of hypocenters. The earthquake sequence consisted of foreshocks, the M_s 6.2 first main shock, and aftershocks on the Elmore Ranch fault followed by the M_s 6.6 second main shock and aftershocks on the Superstition Hills fault. There was dramatic surface rupture along the Superstition Hills fault in three segments: the northern segment, the southern segment, and the Wienert fault.

In Chapter 2, M_L≥4.0 earthquakes from 1945 to 1971 that have Caltech catalog locations near the 1987 sequence are relocated. It is found that none of the relocated earthquakes occur on the southern segment of the Superstition Hills fault and many occur at the intersection of the Superstition Hills and Elmore Ranch faults. Also, some other northeast-striking faults may have been active during that time.

Chapter 3 discusses the Superstition Hills earthquake sequence using data from the Caltech-U.S.G.S. southern California seismic array. The earthquakes are relocated and their distribution correlated to the type and arrangement of the basement rocks. The larger earthquakes occur only where continental crystalline basement rocks are present. The northern segment of the Superstition Hills fault has more aftershocks than the southern segment.

Chapter 4 presents an inversion of long-period teleseismic data for the second mainshock of the 1987 sequence, on the Superstition Hills fault. Most of the long-period seismic energy seen teleseismically is radiated from the southern segment of the Superstition Hills fault. The fault dip is near vertical along the northern segment of the fault and steeply southwest-dipping along the southern segment.

Chapter 5 is a field study of slip and afterslip measurements made along the Superstition Hills fault following the second mainshock. Slip and afterslip measurements were started only two hours after the earthquake. In some locations, afterslip more than doubled the coseismic slip. The northern and southern segments of the Superstition Hills fault differ in the proportion of coseismic and postseismic slip to the total slip.

The northern segment of the Superstition Hills fault had more aftershocks, more historic earthquakes, released less teleseismic energy, and had a smaller proportion of afterslip to total slip than the southern segment. The boundary between the two segments lies at a step in the basement that separates a deeper metasedimentary basement to the south from a shallower crystalline basement to the north.

Part 2 of the thesis deals with the three-dimensional velocity structure of southern California. In Chapter 7, an a priori three-dimensional crustal velocity model is constructed by partitioning southern California into geologic provinces, with each province having a consistent one-dimensional velocity structure. The one-dimensional velocity structures of the regions were then assembled into a three-dimensional model, which was calibrated by forward modeling of explosion travel times.

In Chapter 8, the three-dimensional velocity model is used to locate earthquakes. For about 1000 earthquakes relocated in the Los Angeles basin, the three-dimensional model yields a variance of the travel time residuals 47 per cent less than the catalog locations found using a standard one-dimensional velocity model. Other than the 1987 Whittier earthquake sequence, little correspondence is seen between these earthquake locations and elements of a recent structural cross section of the Los Angeles basin. The Whittier sequence involved rupture of a north-dipping thrust fault bounded on at least one side by a strike-slip fault. The 1988 Pasadena earthquake was a deep left-lateral event on the Raymond fault. The 1989 Montebello earthquake was a thrust event on a structure similar to that on which the Whittier earthquake occurred. The 1989 Malibu earthquake was a thrust or oblique-slip event adjacent to the 1979 Malibu earthquake.

At least two of the largest recent thrust earthquakes (San Fernando and Whittier) in the Los Angeles basin have had the extent of their thrust plane ruptures limited by strike-slip faults. This suggests that the buried thrust faults underlying the Los Angeles basin are segmented by strike-slip faults.

Earthquake and explosion travel times are inverted for the three-dimensional velocity structure of southern California in Chapter 9. The inversion reduced the variance of the travel time residuals by 47 per cent compared to the starting model, a reparameterized version of the forward model of Chapter 7. The Los Angeles basin is well resolved, with seismically slow sediments atop a crust of granitic velocities. Moho depth is between 26 and 32 km.

Relevance: 10.00%

Abstract:

Multi-finger caging offers a rigorous and robust approach to robot grasping. This thesis provides several novel algorithms for caging polygons and polyhedra in two and three dimensions. Caging refers to a robotic grasp that does not necessarily immobilize an object, but prevents it from escaping to infinity. The first algorithm considers caging a polygon in two dimensions using two point fingers. The second algorithm extends the first to three dimensions. The third algorithm considers caging a convex polygon in two dimensions using three point fingers, and considers robustness of this cage to variations in the relative positions of the fingers.

This thesis describes an algorithm for finding all two-finger cage formations of planar polygonal objects based on a contact-space formulation. It shows that two-finger cages have several useful properties in contact space. First, the critical points of the cage representation in the hand’s configuration space appear as critical points of the inter-finger distance function in contact space. Second, these critical points can be graphically characterized directly on the object’s boundary. Third, contact space admits a natural rectangular decomposition such that all critical points lie on the rectangle boundaries, and the sublevel sets of contact space and free space are topologically equivalent. These properties lead to a caging graph that can be readily constructed in contact space. Starting from a desired immobilizing grasp of a polygonal object, the caging graph is searched for the minimal, intermediate, and maximal caging regions surrounding the immobilizing grasp. An example constructed from real-world data illustrates and validates the method.

A second algorithm is developed for finding caging formations of a 3D polyhedron for two point fingers using a lower dimensional contact-space formulation. Results from the two-dimensional algorithm are extended to three dimensions. Critical points of the inter-finger distance function are shown to be identical to the critical points of the cage. A decomposition of contact space into 4D regions having useful properties is demonstrated. A geometric analysis of the critical points of the inter-finger distance function results in a catalog of grasps in which the cages change topology, leading to a simple test to classify critical points. With these properties established, the search algorithm from the two-dimensional case may be applied to the three-dimensional problem. An implemented example demonstrates the method.

This thesis also presents a study of cages of convex polygonal objects using three point fingers. It considers a three-parameter model of the relative position of the fingers, which gives complete generality for three point fingers in the plane. It analyzes robustness of caging grasps to variations in the relative position of the fingers without breaking the cage. Using a simple decomposition of free space around the polygon, we present an algorithm which gives all caging placements of the fingers and a characterization of the robustness of these cages.

Relevance: 10.00%

Abstract:

The need to estimate percentages and/or numbers occurs frequently during practical research work; accurate but rapid estimates can be useful when planning research programmes. Charts are provided that may be used as a visual aid to estimating numbers of animals/plants in a specific situation, for example, the number of fish fry in a subsample from a hatchery tank, or the percentage composition of a sample such as the percentage algal cover in a pond.

Relevance: 10.00%

Abstract:

Information and communication technologies (ICTs) and the Internet have been providing new forms of interaction and communication among individuals in the digital medium. As a result, new textual genres emerge, along with reinterpretations of genres that already exist in the physical medium but adapt to the peculiarities of the digital one. Because digital genres are recent, there is still little research on them, which is why we chose to investigate this area. Among the diversity of genres in the virtual environment, our research focuses on hyperfictional narratives, or hyperfictions. Hyperfiction has characteristics of genres from the physical environment, but takes shape with characteristics of its own environment, thus becoming a possible digital genre. Hyperfiction may be collaborative or exploratory and is usually hosted on literature websites or digital-literature projects. Its main characteristics are hypertextuality and the use of links, images, and sound, making it a multimodal text (ALONSO, 2011; SANTOS, 2010). This research concentrates on exploratory hyperfictions, since they have not received the substantial amount of research that collaborative hyperfictions have. There is academic debate on whether the act of reading requires (or not) different stances for digital texts as compared with printed ones. Recent studies do not confirm a significant difference in the act of reading digital texts, but they agree on the need for knowledge and mastery of the medium and of the genre to which the text belongs (COSCARELI, 2012). On this assumption, considering reading in the digital medium and proposing a dialogue between linguistics and literature, the present study examines whether hyperfiction constitutes a textual genre in the digital environment (MARCUSCHI, 2008) and whether it is a genre exclusive to this medium.
Thus, our general objective is to map hyperfiction and discuss its characteristics in light of the theoretical assumptions of digital textual genres. As specific objectives, we intend to compile a list of hyperfictions, catalog them according to their elements, and describe them on the basis of their emerging characteristics. We also consider to what extent reading the digital textual genre of hyperfiction demands a different stance from the reader, especially relative to printed textual genres. This is an exploratory and descriptive documentary study with a qualitative basis, using a sample of hyperfictions in English, Portuguese, and Spanish collected from websites between June 2012 and May 2013.