Thursday, September 28, 2017

Ubuntu terminal operations

Change permissions on folders: sudo chmod 0775 * [* expands to every entry in the current directory]

Processes

Background
1. Zombie processes and orphan processes:
http://www.cnblogs.com/limt/p/4199255.html

Commands
1. ps -ef  lists all processes with full details

Wednesday, September 27, 2017

Common git operations

Git operations fall into two broad categories: file operations and branch operations. Here is a summary of the common commands:

I. File operations
0. View what changed:           git diff    usually run after editing code, to check that the changes are correct
1. Stage files:                 git add           [git add . stages all changes in the current directory]
2. Commit to the local repo:    git commit -m "[describe the change]"
3. Push to the remote repo:     git push origin [local branch]:[remote branch]    the remote name can be omitted if both names are the same

The above is the normal commit flow, but if something goes wrong along the way, or you staged or committed code that should not go in, you can undo it as follows:
 4. Undo git add:        git reset HEAD .  [unstage all files]
                                      git reset HEAD -- [filename] [unstage one specific file]
 5. Undo a commit:   git log  [view the commit log]
                                      git reset --hard commit_id [reset back to commit commit_id; note that --hard also discards uncommitted changes]
 6. Undo a push:       git reset --hard <commit> [use HEAD^ to step back one commit], then overwrite the remote with git push -f origin [branch]

 7. Pull code from a remote branch: git pull origin [remote branch]

II. Branch operations
1. Create a local branch:    git checkout -b [branch] [branches off the current HEAD]
                           git checkout -b [branch] origin/[remote branch] [checks out the remote branch's code]
2. Create a remote branch:    git push origin [local branch]:[remote branch]

3. Delete a local branch:    git branch -D [local branch]
4. Delete a remote branch:    git push origin :[remote branch]    [pushing an empty ref deletes the remote branch]

5. Switch branches:           git checkout [branch]

6. List remote branches:    git branch -r

7. Inspect a historical commit:    git log to find the commit hash, e.g. 123456...89
                                  git checkout 123456..89 [moves to that commit, in detached-HEAD state]

More to be added later.

Monday, September 25, 2017

virtualenv environments and installing zmq

An excellent virtualenv setup tutorial:
http://snailvfx.github.io/2016/05/11/virtualenv/

zmq installation guide:
https://tuananh.org/2015/06/16/how-to-install-zeromq-on-ubuntu/

zmq tutorial:
https://www.zhihu.com/question/28648575

Configuring pip mirrors

Reference: http://www.jianshu.com/p/785bb1f4700d

One-off use:
pip install pythonModuleName -i https://pypi.douban.com/simple

Permanent: edit ~/.pip/pip.conf:
[global]
index-url = https://pypi.douban.com/simple  [or whichever mirror you want]

Detailed mirror list:
http://blog.csdn.net/d5224/article/details/52025897

Installing OpenFst and pyfst

First install OpenFst. If you want to install pyfst afterwards, you must install OpenFst 1.3.3: pyfst is no longer updated and only supports OpenFst up to 1.3.3.
Open the OpenFst homepage and download version 1.3.3: http://www.openfst.org/twiki/bin/view/FST/FstDownload

Unpack the archive and run:
./configure
make
sudo make install
and the installation is done.

Test whether it works by following: http://blog.csdn.net/chinabing/article/details/50724575
If the build fails, patch the interval-set.h file as described in https://github.com/vchahun/fast_umorph/issues/1

OpenFst needs C++11; setup instructions:
http://blog.csdn.net/lisonglisonglisong/article/details/21947255

Add to ~/.bashrc:
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib
At this point OpenFst is more or less installed.

Next, install pyfst:
sudo CFLAGS="-std=c++0x" pip install pyfst -i https://pypi.douban.com/simple
Note that you must use the Douban mirror: because pyfst has not been updated, many mirrors no longer carry the pyfst package.
If by the time you read this the Douban mirror also fails, try the mirrors listed at:
http://blog.csdn.net/d5224/article/details/52025897


Cloning a specific version from GitHub

Find the repository on GitHub.
Click Releases to see the tagged versions.
Download a specific version with: git clone [**.git] -b [version tag]

Wednesday, September 13, 2017

File read pointer offset: seek()

Changes the position of the read pointer while reading a file.
seek(len_of_character, 0): the 0 means the offset is measured from the start of the file, and len_of_character is the number of bytes to move.

Change the 0 to 1 and the offset becomes relative to the current position; so far I have not found this mode very useful.
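A minimal sketch of both modes (the filename is made up; the file is opened in binary mode, since Python 3 text mode only allows relative seeks with an offset of 0):

with open("notes.txt", "rb") as f:    # binary mode, so whence=1 works in Python 3
    f.seek(5, 0)                      # absolute: jump to byte 5 from the start
    chunk = f.read(3)                 # read 3 bytes from there
    f.seek(-3, 1)                     # relative: step back 3 bytes from the current position
    assert f.read(3) == chunk         # re-reads the same 3 bytes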

Tuesday, September 12, 2017

epoch, batch, num_steps

An epoch usually means one full pass over the training set. For example, if you have 20000 images and batch_size=100, that epoch consists of 20000/100 = 200 steps.
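The same bookkeeping in a couple of lines of Python (using the numbers from the example above):

num_images, batch_size = 20000, 100
steps_per_epoch = num_images // batch_size   # 20000 / 100 = 200 steps per epoch
print(steps_per_epoch)                       # 200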

tensorflow/models/ptb

The PTB code can be found under tensorflow/models/tutorials on gitlab; this post only walks through its data preprocessing and its model.

1. Running it
First, how to run it. Download the dataset: http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz
Unpack it to get the ptb.test/train/valid.txt files,
then run: python ptb_word_lm.py --data_path=[data directory] --model small
2. Data preprocessing
To read the code, start from the entry point: in ptb_word_lm.py we find the main function


def main(_):
  if not FLAGS.data_path:
    raise ValueError("Must set --data_path to PTB data directory")
  gpus = [
      x.name for x in device_lib.list_local_devices()
      if x.device_type == "GPU"
  ]
  if FLAGS.num_gpus > len(gpus):
    raise ValueError(
        "Your machine has only %d gpus "        "which is less than the requested --num_gpus=%d."        % (len(gpus), FLAGS.num_gpus))

  raw_data = reader.ptb_raw_data(FLAGS.data_path)
  train_data, valid_data, test_data, _ = raw_data
  
Clearly, the only part relevant to preprocessing is the ptb_raw_data() function. In reader.py, we find the four related functions:

import collections
import os

import tensorflow as tf


def _read_words(filename):
  with tf.gfile.GFile(filename, "r") as f:
    return f.read().decode("utf-8").replace("\n", "<eos>").split()


def _build_vocab(filename):
  data = _read_words(filename)
  counter = collections.Counter(data)
  count_pairs = sorted(counter.items(), key=lambda x: (-x[1], x[0]))
  words, _ = list(zip(*count_pairs))
  word_to_id = dict(zip(words, range(len(words))))
  return word_to_id


def _file_to_word_ids(filename, word_to_id):
  data = _read_words(filename)
  return [word_to_id[word] for word in data if word in word_to_id]


def ptb_raw_data(data_path=None):

  train_path = os.path.join(data_path, "ptb.train.txt")
  valid_path = os.path.join(data_path, "ptb.valid.txt")
  test_path = os.path.join(data_path, "ptb.test.txt")

  word_to_id = _build_vocab(train_path)
  train_data = _file_to_word_ids(train_path, word_to_id)
  valid_data = _file_to_word_ids(valid_path, word_to_id)
  test_data = _file_to_word_ids(test_path, word_to_id)
  vocabulary = len(word_to_id)
  return train_data, valid_data, test_data, vocabulary
_build_vocab first reads every token in the file (e.g. the, a, ...) and counts how often each occurs (e.g. 'the': 32021, 'a': 323); count_pairs holds these as (word, count) tuples sorted by descending count, e.g. ('the', 32021), ('a', 323), and the vocabulary word_to_id then maps each word to its rank as an integer id.
_file_to_word_ids converts a file into the integer ids of its words according to that vocabulary.
ptb_raw_data returns the id sequences for the training, validation, and test sets, plus the vocabulary size.
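To see concretely what these helpers produce, the same logic can be run on a made-up token list:

import collections

data = "the cat sat on the mat the cat".split()
counter = collections.Counter(data)
# sort by count (descending), then alphabetically -- exactly as _build_vocab does
count_pairs = sorted(counter.items(), key=lambda x: (-x[1], x[0]))
words, _ = list(zip(*count_pairs))
word_to_id = dict(zip(words, range(len(words))))
print(word_to_id)                     # {'the': 0, 'cat': 1, 'mat': 2, 'on': 3, 'sat': 4}
print([word_to_id[w] for w in data])  # the "file" as integer ids: [0, 1, 4, 3, 0, 2, 0, 1]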

with tf.Graph().as_default():
  initializer = tf.random_uniform_initializer(-config.init_scale,
                                              config.init_scale)

  with tf.name_scope("Train"):
    train_input = PTBInput(config=config, data=train_data, name="TrainInput")
    with tf.variable_scope("Model", reuse=None, initializer=initializer):
      m = PTBModel(is_training=True, config=config, input_=train_input)
    tf.summary.scalar("Training Loss", m.cost)
    tf.summary.scalar("Learning Rate", m.lr)
Now jump back to ptb_word_lm. The main function builds the model, and the model's input is constructed by PTBInput(); PTBInput in turn gets its data from reader.ptb_producer(), so that is the definition to look at:
def ptb_producer(raw_data, batch_size, num_steps, name=None):
  """Iterate on the raw PTB data.
  This chunks up raw_data into batches of examples and returns Tensors that  are drawn from these batches.
  Args:    raw_data: one of the raw data outputs from ptb_raw_data.    batch_size: int, the batch size.    num_steps: int, the number of unrolls.    name: the name of this operation (optional).
  Returns:    A pair of Tensors, each shaped [batch_size, num_steps]. The second element    of the tuple is the same data time-shifted to the right by one.
  Raises:    tf.errors.InvalidArgumentError: if batch_size or num_steps are too high.  """  with tf.name_scope(name, "PTBProducer", [raw_data, batch_size, num_steps]):
    raw_data = tf.convert_to_tensor(raw_data, name="raw_data", dtype=tf.int32)

    data_len = tf.size(raw_data)
    batch_len = data_len // batch_size
    data = tf.reshape(raw_data[0: batch_size * batch_len],
                      [batch_size, batch_len])

    epoch_size = (batch_len - 1) // num_steps
    assertion = tf.assert_positive(
        epoch_size,
        message="epoch_size == 0, decrease batch_size or num_steps")
    with tf.control_dependencies([assertion]):
      epoch_size = tf.identity(epoch_size, name="epoch_size")

    i = tf.train.range_input_producer(epoch_size, shuffle=False).dequeue()
    x = tf.strided_slice(data, [0, i * num_steps],
                         [batch_size, (i + 1) * num_steps])
    x.set_shape([batch_size, num_steps])
    y = tf.strided_slice(data, [0, i * num_steps + 1],
                         [batch_size, (i + 1) * num_steps + 1])
    y.set_shape([batch_size, num_steps])
    return x, y
It is easy to see that each dequeue yields one batch_size * num_steps slice of the data, with y being x shifted one step to the right.
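The chunking arithmetic is easier to follow without the TensorFlow queue machinery; here is the same slicing logic in plain Python with made-up numbers:

raw_data = list(range(20))            # a toy id sequence
batch_size, num_steps = 2, 3

batch_len = len(raw_data) // batch_size
data = [raw_data[b * batch_len:(b + 1) * batch_len] for b in range(batch_size)]
epoch_size = (batch_len - 1) // num_steps   # number of (x, y) slices per epoch

for i in range(epoch_size):
    x = [row[i * num_steps:(i + 1) * num_steps] for row in data]
    y = [row[i * num_steps + 1:(i + 1) * num_steps + 1] for row in data]
    print(x, y)   # y is x shifted right by one position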

3. The model
Having covered the data preprocessing, we come to the key part: the model.
class PTBModel(object):
  """The PTB model."""
  def __init__(self, is_training, config, input_):
    self._is_training = is_training
    self._input = input_
    self._rnn_params = None
    self._cell = None
    self.batch_size = input_.batch_size
    self.num_steps = input_.num_steps
    size = config.hidden_size
    vocab_size = config.vocab_size

    with tf.device("/cpu:0"):
      embedding = tf.get_variable(
          "embedding", [vocab_size, size], dtype=data_type())
      inputs = tf.nn.embedding_lookup(embedding, input_.input_data)
    if is_training and config.keep_prob < 1:
      inputs = tf.nn.dropout(inputs, config.keep_prob)

    output, state = self._build_rnn_graph(inputs, config, is_training)

    softmax_w = tf.get_variable(
        "softmax_w", [size, vocab_size], dtype=data_type())
    softmax_b = tf.get_variable("softmax_b", [vocab_size], dtype=data_type())
    logits = tf.nn.xw_plus_b(output, softmax_w, softmax_b)
    # Reshape logits to be a 3-D tensor for sequence loss
    logits = tf.reshape(logits, [self.batch_size, self.num_steps, vocab_size])

    # Use the contrib sequence loss and average over the batches
    loss = tf.contrib.seq2seq.sequence_loss(
        logits,
        input_.targets,
        tf.ones([self.batch_size, self.num_steps], dtype=data_type()),
        average_across_timesteps=False,
        average_across_batch=True)

    # Update the cost
    self._cost = tf.reduce_sum(loss)
    self._final_state = state

    if not is_training:
      return
    self._lr = tf.Variable(0.0, trainable=False)
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(self._cost, tvars),
                                      config.max_grad_norm)
    optimizer = tf.train.GradientDescentOptimizer(self._lr)
    self._train_op = optimizer.apply_gradients(
        zip(grads, tvars),
        global_step=tf.contrib.framework.get_or_create_global_step())

    self._new_lr = tf.placeholder(
        tf.float32, shape=[], name="new_learning_rate")
    self._lr_update = tf.assign(self._lr, self._new_lr)

  def _build_rnn_graph(self, inputs, config, is_training):
    if config.rnn_mode == CUDNN:
      return self._build_rnn_graph_cudnn(inputs, config, is_training)
    else:
      return self._build_rnn_graph_lstm(inputs, config, is_training)

  def _build_rnn_graph_cudnn(self, inputs, config, is_training):
    """Build the inference graph using CUDNN cell."""    inputs = tf.transpose(inputs, [1, 0, 2])
    self._cell = tf.contrib.cudnn_rnn.CudnnLSTM(
        num_layers=config.num_layers,
        num_units=config.hidden_size,
        input_size=config.hidden_size,
        dropout=1 - config.keep_prob if is_training else 0)
    params_size_t = self._cell.params_size()
    self._rnn_params = tf.get_variable(
        "lstm_params",        initializer=tf.random_uniform(
            [params_size_t], -config.init_scale, config.init_scale),        validate_shape=False)
    c = tf.zeros([config.num_layers, self.batch_size, config.hidden_size],
                 tf.float32)
    h = tf.zeros([config.num_layers, self.batch_size, config.hidden_size],
                 tf.float32)
    self._initial_state = (tf.contrib.rnn.LSTMStateTuple(h=h, c=c),)
    outputs, h, c = self._cell(inputs, h, c, self._rnn_params, is_training)
    outputs = tf.transpose(outputs, [1, 0, 2])
    outputs = tf.reshape(outputs, [-1, config.hidden_size])
    return outputs, (tf.contrib.rnn.LSTMStateTuple(h=h, c=c),)

  def _get_lstm_cell(self, config, is_training):
    if config.rnn_mode == BASIC:
      return tf.contrib.rnn.BasicLSTMCell(
          config.hidden_size, forget_bias=0.0, state_is_tuple=True,
          reuse=not is_training)
    if config.rnn_mode == BLOCK:
      return tf.contrib.rnn.LSTMBlockCell(
          config.hidden_size, forget_bias=0.0)
    raise ValueError("rnn_mode %s not supported" % config.rnn_mode)

  def _build_rnn_graph_lstm(self, inputs, config, is_training):
    """Build the inference graph using canonical LSTM cells."""    # Slightly better results can be obtained with forget gate biases    # initialized to 1 but the hyperparameters of the model would need to be    # different than reported in the paper.    cell = self._get_lstm_cell(config, is_training)
    if is_training and config.keep_prob < 1:
      cell = tf.contrib.rnn.DropoutWrapper(
          cell, output_keep_prob=config.keep_prob)

    cell = tf.contrib.rnn.MultiRNNCell(
        [cell for _ in range(config.num_layers)], state_is_tuple=True)

    self._initial_state = cell.zero_state(config.batch_size, data_type())
    state = self._initial_state
    # Simplified version of tensorflow_models/tutorials/rnn/rnn.py's rnn().
    # This builds an unrolled LSTM for tutorial purposes only.
    # In general, use the rnn() or state_saving_rnn() from rnn.py.
    #
    # The alternative version of the code below is:
    #
    # inputs = tf.unstack(inputs, num=num_steps, axis=1)
    # outputs, state = tf.contrib.rnn.static_rnn(cell, inputs,
    #                            initial_state=self._initial_state)
    outputs = []
    with tf.variable_scope("RNN"):
      for time_step in range(self.num_steps):
        if time_step > 0: tf.get_variable_scope().reuse_variables()
        (cell_output, state) = cell(inputs[:, time_step, :], state)
        outputs.append(cell_output)
    output = tf.reshape(tf.concat(outputs, 1), [-1, config.hidden_size])
    return output, state

  def assign_lr(self, session, lr_value):
    session.run(self._lr_update, feed_dict={self._new_lr: lr_value})

  def export_ops(self, name):
    """Exports ops to collections."""    self._name = name
    ops = {util.with_prefix(self._name, "cost"): self._cost}
    if self._is_training:
      ops.update(lr=self._lr, new_lr=self._new_lr, lr_update=self._lr_update)
      if self._rnn_params:
        ops.update(rnn_params=self._rnn_params)
    for name, op in ops.iteritems():
      tf.add_to_collection(name, op)
    self._initial_state_name = util.with_prefix(self._name, "initial")
    self._final_state_name = util.with_prefix(self._name, "final")
    util.export_state_tuples(self._initial_state, self._initial_state_name)
    util.export_state_tuples(self._final_state, self._final_state_name)

  def import_ops(self):
    """Imports ops from collections."""    if self._is_training:
      self._train_op = tf.get_collection_ref("train_op")[0]
      self._lr = tf.get_collection_ref("lr")[0]
      self._new_lr = tf.get_collection_ref("new_lr")[0]
      self._lr_update = tf.get_collection_ref("lr_update")[0]
      rnn_params = tf.get_collection_ref("rnn_params")
      if self._cell and rnn_params:
        params_saveable = tf.contrib.cudnn_rnn.RNNParamsSaveable(
            self._cell,
            self._cell.params_to_canonical,
            self._cell.canonical_to_params,
            rnn_params,
            base_variable_scope="Model/RNN")
        tf.add_to_collection(tf.GraphKeys.SAVEABLE_OBJECTS, params_saveable)
    self._cost = tf.get_collection_ref(util.with_prefix(self._name, "cost"))[0]
    num_replicas = FLAGS.num_gpus if self._name == "Train" else 1
    self._initial_state = util.import_state_tuples(
        self._initial_state, self._initial_state_name, num_replicas)
    self._final_state = util.import_state_tuples(
        self._final_state, self._final_state_name, num_replicas)

To be continued.
