
Towards a General Theory of Education-based Inequality and Mobility: Who Wins and Loses under China's Educational Expansion, 1981-2010.



Abstract

Sparse linear models pose two dual views of data, embodied in compressive sensing and sparse coding. Despite their mathematical equivalence, compressive sensing and sparse coding are two different classes of application for sparse linear models. Compressive sensing draws recoverable, low-dimensional compressed representations from a blind linear projection of the data. Sparse coding discovers structural patterns underlying the data through a forced decomposition onto a given dictionary of basis vectors. Sparsity is the common constraint that makes exact recovery possible in compressive sensing and allows the forced decomposition to unveil meaningful features in sparse coding. In this thesis, I build on compressive sensing and sparse coding to explore reconstructive and discriminative applications in sensing, wireless networking, and machine learning. Specifically, I aim to develop recovery and feature-learning methods that are robust to complex data transformations and alterations. With a wideband spectrum sensing application for cognitive radios, I empirically demonstrate the resilience of the proposed sparse recovery technique to the linear and nonlinear distortions present in a mix of heavily subsampled RF measurements. I push beyond the best-known efficiency for distributed compressive sensing and show the feasibility of scaling the spectrum sensing application at constant communication cost. I also focus on learning sparse feature representations for discriminative machine learning tasks. I build a classification pipeline based on both single-layer and multilayer sparse coding, trained on several data modalities including text, images, and time series. To exploit possible higher-level constructs of features in the data, I propose a deep architecture built on multilayer sparse coding, the Deep Sparse-coded Network (DSN). Trained with layer-by-layer dictionary learning followed by the proposed DSN backpropagation algorithm, DSN outperforms a deep stacked-autoencoder neural network on image and time-series classification. In addition, I present Nearest Neighbor Sparse Coding (NNSC), an enhancement of sparse coding that imposes a nearest-neighbor constraint in the sparse feature domain. Despite a worse reconstruction error, NNSC improves the classification performance of classical sparse coding.
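As a concrete illustration of the two dual views described in the abstract (and not taken from the thesis itself), the minimal NumPy sketch below solves the same l1-regularized least-squares problem twice with a simple ISTA solver written for this example: once as compressive sensing, recovering a sparse signal from a blind random projection, and once as sparse coding, forcing a dense signal into a sparse decomposition over a given dictionary. All matrix sizes and parameters are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)

# Compressive sensing view: recover a sparse signal from a blind linear projection.
n, m, k = 256, 64, 5                        # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
y = Phi @ x_true                            # compressed measurements
x_hat = ista(Phi, y, lam=0.01, n_iter=2000)
print("recovery error:", np.linalg.norm(x_hat - x_true))

# Sparse coding view: forced decomposition of a dense signal on a given dictionary.
D = rng.normal(size=(m, n))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
s = rng.normal(size=m)                      # observed signal
code = ista(D, s, lam=0.1)                  # sparse code / feature vector
print("nonzeros in code:", np.count_nonzero(np.round(code, 3)))
```

The same solver serves both views; only the role of the matrix changes, which is the sense in which the two problems are mathematically equivalent yet used differently.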
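The classification pipeline outlined in the abstract can be approximated with off-the-shelf components. The sketch below is a hypothetical single-layer version using scikit-learn's DictionaryLearning and LogisticRegression on the bundled digits dataset (none of which the thesis necessarily uses): unsupervised dictionary learning, sparse codes as features, and a linear classifier on top. A multilayer variant in the spirit of DSN would train a second dictionary on the codes Z_tr; the DSN backpropagation algorithm and the NNSC constraint are thesis-specific and are not reproduced here.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load and scale a small image dataset (illustrative stand-in for the thesis data).
X, y = load_digits(return_X_y=True)
X = X / 16.0
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Layer 1: learn a dictionary and encode each image as a sparse code.
dico = DictionaryLearning(n_components=64, alpha=1.0, max_iter=20,
                          transform_algorithm="lasso_lars",
                          transform_alpha=0.5, random_state=0)
Z_tr = dico.fit_transform(X_tr)   # sparse codes used as features
Z_te = dico.transform(X_te)

# Discriminative stage: a linear classifier on the sparse features.
clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("test accuracy:", clf.score(Z_te, y_te))
```

The design point the abstract makes is that the sparse codes, not the raw inputs, feed the classifier; stacking further coding layers or adding the nearest-neighbor constraint changes the feature extraction, not this overall pipeline shape.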

Record Details

  • Author: Guo, Maocan
  • Author affiliation: Harvard University
  • Degree grantor: Harvard University
  • Subject: Sociology
  • Degree: Ph.D.
  • Year: 2015
  • Pages: 265 p.
  • Total pages: 265
  • Format: PDF
  • Language: eng
  • CLC classification:
  • Keywords:
