Python-HBase
Quick setup notes and problem log
HDFS
VM overview
OS: Red Hat
Disk: 150 GB
RAM: 10 GB
CPUs: 4
IP addresses
hadoop0: 192.168.56.108
hadoop1: 192.168.56.109
hadoop2: 192.168.56.110
1. Software versions
hadoop3.3
/opt/module/hadoop-3.3
jdk1.8.0-201
/opt/module/jdk1.8
2. System environment variables
vim /etc/hosts
192.168.56.108 hadoop0
192.168.56.109 hadoop1
192.168.56.110 hadoop2
Keep this file identical on all three machines.
vim /etc/profile
export JAVA_HOME=/opt/module/jdk1.8
export JAVA_CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/opt/module/hadoop-3.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export SQOOP_SERVER_EXTRA_LIB=$SQOOP_HOME/extLib
export HADOOP_COMMON_HOME=/opt/module/hadoop-3.3/share/hadoop/common
export HADOOP_HDFS_HOME=/opt/module/hadoop-3.3/share/hadoop/hdfs
export HADOOP_MAPRED_HOME=/opt/module/hadoop-3.3/share/hadoop/mapreduce
export HADOOP_YARN_HOME=/opt/module/hadoop-3.3/share/hadoop/yarn
Keep this identical on all three machines, then run source /etc/profile to apply it.
3. Passwordless SSH login
1) Disable the firewall
service iptables status
service iptables stop
chkconfig iptables off
-- Disable SELinux
# vim /etc/selinux/config
-- Comment out:
#SELINUX=enforcing
#SELINUXTYPE=targeted
-- Add:
SELINUX=disabled
2. Passwordless SSH
I. Passwordless login to the local machine
The following configures passwordless login on hadoop-master itself as the example; repeat the same steps on the three slave nodes hadoop-slave1 to hadoop-slave3.
1) Generate the key pair
ssh-keygen -t rsa
2) Append the public key to the authorized_keys file
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
3) Set the permissions
chmod 600 .ssh/authorized_keys
4) Verify that the local machine can be reached without a password
ssh hadoop-master
Finally, repeat the same configuration on hadoop-slave1 to hadoop-slave3.
II. Passwordless login from hadoop-master to hadoop-slave1, hadoop-slave2 and hadoop-slave3, using hadoop-slave1 as the example:
1) Log in to hadoop-slave1 and copy hadoop-master's public key id_rsa.pub into hadoop-slave1's /root/ directory.
scp root@hadoop-master:/root/.ssh/id_rsa.pub /root/
2) Append hadoop-master's public key (id_rsa.pub) to hadoop-slave1's authorized_keys.
cat id_rsa.pub >> .ssh/authorized_keys
rm -f id_rsa.pub
3) Test from hadoop-master:
ssh hadoop-slave1
III. Configure passwordless login from hadoop-slave1 to hadoop-slave3 into hadoop-master
The following uses hadoop-slave1 logging in to hadoop-master without a password as the example; repeat the steps for hadoop-slave2 and hadoop-slave3.
1) Log in to hadoop-master and copy hadoop-slave1's public key id_rsa.pub into hadoop-master's /root/ directory.
scp root@hadoop-slave1:/root/.ssh/id_rsa.pub /root/
2) Append hadoop-slave1's public key (id_rsa.pub) to hadoop-master's authorized_keys.
cat id_rsa.pub >> .ssh/authorized_keys
rm -f id_rsa.pub    # delete the copied id_rsa.pub
3) Test from hadoop-slave1:
ssh hadoop-master
Repeat for hadoop-slave2 and hadoop-slave3 (a verification sketch for all hosts follows below).
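Optional sanity check (not part of the original notes): after the key exchange above, the following Python sketch verifies from the current node that every host accepts key-based login. It assumes only the hostnames used in this section; BatchMode=yes makes ssh fail instead of prompting, so a non-zero exit status means passwordless login is not set up for that host.
import subprocess

# Check passwordless SSH from this node to every other node in the cluster.
hosts = ["hadoop-master", "hadoop-slave1", "hadoop-slave2", "hadoop-slave3"]
for host in hosts:
    result = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=5", host, "hostname"],
        capture_output=True, text=True,
    )
    status = "OK" if result.returncode == 0 else "NO KEY LOGIN"
    print(host, status, result.stdout.strip())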
3. Hadoop deployment
1) On hadoop-master: unpack the installation package and create the base directories (the commands below come from the reference guide for Hadoop 2.7.3; these notes actually run hadoop-3.3 under /opt/module)
# download
wget http://apache.claz.org/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
# extract
tar -xzvf hadoop-2.7.3.tar.gz -C /usr/local
# rename
mv hadoop-2.7.3 hadoop
2. ./hadoop/etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop0:9888</value>
<description>Default filesystem URI (NameNode RPC address)</description>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/opt/module/hadoop-3.3/tmp</value>
<description>Base directory for temporary files</description>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
<description>Allow proxy access from any host</description>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
<description>Allow proxy access for any group</description>
</property>
</configuration>
3. ./hadoop/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/opt/module/hadoop-3.3/tmp/namenode</value>
<description>Where the NameNode persists the namespace and transaction logs on the local filesystem</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/opt/module/hadoop-3.3/tmp/datanode</value>
<description>Where the DataNode stores blocks on the local filesystem</description>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>hadoop0:9870</value>
<description>NameNode web UI address</description>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop1:9868</value>
<description>SecondaryNameNode web UI address</description>
</property>
</configuration>
4. ./mapred-site.xml
cp ./hadoop/etc/hadoop/mapred-site.xml.template ./hadoop/etc/hadoop/mapred-site.xml   # only needed on Hadoop 2.x; Hadoop 3.x ships mapred-site.xml directly
vim ./hadoop/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description>MapReduce execution framework</description>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop0:19888</value>
<description>JobHistory server web UI address</description>
</property>
</configuration>
5. ./yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop2</value>
<description>ResourceManager (YARN master) host</description>
</property>
<property>
<name>yarn.application.classpath</name>
<value>/opt/module/hadoop-3.3/etc/hadoop:/opt/module/hadoop-3.3/share/hadoop/common/lib/*:/opt/module/hadoop-3.3/share/hadoop/common/*:/opt/module/hadoop-3.3/share/hadoop/hdfs:/opt/module/hadoop-3.3/share/hadoop/hdfs/lib/*:/opt/module/hadoop-3.3/share/hadoop/hdfs/*:/opt/module/hadoop-3.3/share/hadoop/mapreduce/*:/opt/module/hadoop-3.3/share/hadoop/yarn:/opt/module/hadoop-3.3/share/hadoop/yarn/lib/*:/opt/module/hadoop-3.3/share/hadoop/yarn/*</value>
</property>
</configuration>
6. masters and slaves (workers)
workers
hadoop0
hadoop1
hadoop2
Hadoop 3.3 uses only the workers file (there are no masters/slaves files).
Configure the Hadoop environment on the slave nodes
The following uses hadoop-slave1 as the example; repeat the steps for hadoop-slave2 and hadoop-slave3.
1) Copy hadoop to the hadoop-slave1 node
scp -r /usr/local/hadoop hadoop-slave1:/usr/local/
Log in to hadoop-slave1 and delete the slaves file:
rm -rf /usr/local/hadoop/etc/hadoop/slaves
2) Configure the environment variables
vi /etc/profile
## contents
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
Make the hadoop command take effect in the current shell:
source /etc/profile
Repeat for the other slave nodes.
Cluster startup and initialization
1) Format the HDFS filesystem
In the master's ~/hadoop directory, run:
bin/hdfs namenode -format    # (bin/hadoop namenode -format is the older, deprecated spelling)
Formats the NameNode; run this once before the first start and never again afterwards.
2) Then start Hadoop:
sbin/start-dfs.sh
sbin/start-yarn.sh
3) Check the processes with jps
# on the master, jps shows
25928 SecondaryNameNode
25742 NameNode
26387 Jps
26078 ResourceManager
# on each slave, jps shows
24002 NodeManager
23899 DataNode
24179 Jps
4) Checking cluster status from the command line
jps only shows whether the HDFS and MapReduce daemons started; it says nothing about the overall state of the cluster. hdfs dfsadmin -report does: it quickly shows which nodes are down, the total and used HDFS capacity, and each node's disk usage.
hdfs dfsadmin -report    # (hadoop dfsadmin -report is the older, deprecated form)
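If the report is needed programmatically (for example from the Python tooling later in these notes), a minimal sketch follows; it assumes the hdfs binary is on PATH as configured in /etc/profile, and the exact report labels may vary slightly between Hadoop versions.
import subprocess

# Run `hdfs dfsadmin -report` and print only the summary lines (capacity, usage, live/dead datanodes).
report = subprocess.run(["hdfs", "dfsadmin", "-report"], capture_output=True, text=True)
for line in report.stdout.splitlines():
    if line.startswith(("Configured Capacity", "DFS Used", "DFS Remaining",
                        "Live datanodes", "Dead datanodes")):
        print(line)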
5) Restarting Hadoop
sbin/stop-all.sh
sbin/start-all.sh
HIVE
Environment variables
Edit the environment variables:
Run: vi /etc/profile
export JAVA_HOME=/usr/local/software/jdk1.8.0_66
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/usr/local/software/hadoop_2.7.1
export HBASE_HOME=/usr/local/software/hbase_1.2.2
export HIVE_HOME=/usr/local/software/apache-hive-2.3.0-bin
export PATH=.:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$PATH
Run source /etc/profile to reload the environment variables.
Source: https://blog.csdn.net/yuan_xw/article/details/78197917 (CSDN, CC 4.0 BY-SA)
conf
hive-site.xml
Zookeeper
conf
zoo.cfg
cp zoo_sample.cfg zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/module/apache-zookeeper-3.7.1-bin/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
# hadoopN corresponds to the host mappings configured in /etc/hosts above; the first port (2888)
# is used for data synchronization and messaging between servers, the second (2889 here) is the leader-election port
server.0=hadoop0:2888:2889
server.1=hadoop1:2888:2889
server.2=hadoop2:2888:2889
Problem log
ZooKeeper fails to start: Error: Could not find or load main class org.apache.zookeeper.server.quorum.QuorumPeer
The tarball downloaded from the official site was the source package, which has not been compiled. Either build it manually or download the pre-built release (its name contains "bin"); re-downloading the compiled package and extracting it made startup succeed.
Web searches did not turn up a correct fix, so I went to the ZooKeeper documentation; under "Standalone Operation" it says:
"The server is contained in a single JAR" — the server ships as a single jar, which the source package does not include.
File permission problems
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Address unresolved: hadoop0:2889
This error is caused by a trailing space after the port number on the server.1 line.
Each server.N entry must correspond one-to-one with the myid file in dataDir (for example, myid on hadoop0 contains 0).
java.net.BindException: Address already in use
Cannot open channel to 2 at election address hadoop2/192.168.56.110:2889
java.net.ConnectException: Connection refused (Connection refused)
Once all three nodes are started, status reports normally.
The other servers had not started the service yet, so of course the connection was refused.
In zoo.cfg, change this machine's own entry to 0.0.0.0.
Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
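A quick reachability check for the three ZooKeeper nodes, sketched in Python (not from the original notes). It uses ZooKeeper's four-letter command ruok on the client port 2181; note that on ZooKeeper 3.5+ these commands are disabled unless whitelisted (e.g. 4lw.commands.whitelist=ruok,stat in zoo.cfg), so an empty reply does not necessarily mean the server is down.
import socket

# Send "ruok" to each node's client port; a healthy, whitelisted server answers "imok".
for node in ["hadoop0", "hadoop1", "hadoop2"]:
    try:
        with socket.create_connection((node, 2181), timeout=3) as sock:
            sock.sendall(b"ruok")
            reply = sock.recv(16).decode()
        print(node, reply or "no reply (port open; check the 4lw whitelist)")
    except OSError as exc:
        print(node, "connection failed:", exc)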
HBase
Install HBase
Install and configure it on hadoop-master first, then copy it to the slave nodes.
wget http://mirror.bit.edu.cn/apache/hbase/1.3.1/hbase-1.3.1-bin.tar.gz
# extract
tar -xzvf hbase-1.3.1-bin.tar.gz -C /usr/local/
# rename
mv hbase-1.3.1 hbase
Environment variable configuration
vim /etc/profile
# contents
export HBASE_HOME=/opt/module/hbase-2.4.17
export PATH=$HBASE_HOME/bin:$PATH
# apply immediately
source /etc/profile
Raise the ulimit open-files limit
ulimit -n 10240
Configuration files
/hbase/conf
hbase-env.sh
# contents
export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk.x86_64
export HBASE_CLASSPATH=/usr/local/hbase/conf
# let HBase manage its own ZooKeeper; no separate ZooKeeper instance is needed
export HBASE_MANAGES_ZK=true
export HBASE_HOME=/usr/local/hbase
export HADOOP_HOME=/usr/local/hadoop
# HBase log directory
export HBASE_LOG_DIR=/usr/local/hbase/logs
hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop-master:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.master</name>
<value>hadoop-master:60000</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop-master,hadoop-slave1,hadoop-slave2,hadoop-slave3</value>
</property>
</configuration>
1. Hadoop port: hbase.rootdir must match the fs.defaultFS address configured for HDFS
ZooKeeper port: must match the ZooKeeper client port (2181 by default)
regionservers
hadoop0
hadoop1
hadoop2
Copy HBase to the slave nodes
scp -r /opt/module/hbase-2.4.17 hadoop1:/opt/module/hbase-2.4.17
scp -r /opt/module/hbase-2.4.17 hadoop2:/opt/module/hbase-2.4.17
Cluster startup
Start HBase
Starting only needs to be done on the master node:
~/hbase/bin/start-hbase.sh
Processes on the master
[hadoop@master ~]$ jps
6225 Jps
2897 SecondaryNameNode # hadoop process
2710 NameNode # hadoop master process
3035 ResourceManager # hadoop process
5471 HMaster # hbase master process
2543 HQuorumPeer # zookeeper process
Error 1: duplicate SLF4J bindings on the classpath kept the service from starting cleanly; deleting one of the two jars fixes it.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/module/hadoop-3.3/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/module/hbase-2.4.17/lib/client-facing-thirdparty/slf4j-reload4j-1.7.33.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
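To see exactly which jars collide before deleting one, here is a small optional sketch; the paths are the ones used in these notes, adjust them for your own layout.
import glob

# List the SLF4J binding jars from both Hadoop and HBase; these notes keep Hadoop's copy
# and delete the one under HBase's client-facing-thirdparty directory.
patterns = [
    "/opt/module/hadoop-3.3/share/hadoop/common/lib/slf4j-*.jar",
    "/opt/module/hbase-2.4.17/lib/client-facing-thirdparty/slf4j-*.jar",
]
for pattern in patterns:
    for jar in glob.glob(pattern):
        print(jar)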
Processes on the slaves
[hadoop@slave1 ~]$ jps
4689 Jps
2533 HQuorumPeer # zookeeper process
2589 DataNode # hadoop slave process
4143 HRegionServer # hbase slave process
If a standalone ZooKeeper is installed:
Start order: hadoop -> zookeeper -> hbase
Stop order: hbase -> zookeeper -> hadoop
If HBase's bundled ZooKeeper is used:
Start order: hadoop -> hbase
Stop order: hbase -> hadoop
Problem log
Starting the HBase shell
3210 Jps
[root@hadoop0 hbase-2.4.17]# ssh hadoop1
Last login: Thu May 4 09:20:38 2023 from hadoop0
[root@hadoop1 ~]# jps
1680 SecondaryNameNode
1846 HQuorumPeer
2088 Jps
1609 DataNode
[root@hadoop1 ~]# exit
logout
Connection to hadoop1 closed.
[root@hadoop0 hbase-2.4.17]# ./bin/hbase shell
LoadError: load error: irb/completion -- java.lang.NoSuchMethodError: jline.console.completer.CandidateListCompletionHandler.setPrintSpaceAfterFullCompletion(Z)V
require at org/jruby/RubyKernel.java:974
require at uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:54
<main> at classpath:/jar-bootstrap.rb:42
Again caused by the incompatible slf4j jars.
Back up HBase's jars to a safe directory first!
Replace HBase's jar with the copy shipped with Hadoop.
slf4j jar replacement
hbase-site.xml configuration
org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
The error above means the Master is still initializing.
Possible causes:
1. The cluster nodes' clocks are out of sync. Run date on each node to compare them; if they differ, see https://blog.csdn.net/m0_46413065/article/details/116378004 for syncing an offline cluster.
2. If that does not help, stale HBase data may be left in HDFS and in ZooKeeper and needs to be deleted (typically the /hbase directory in HDFS and the /hbase znode via zkCli.sh). Note: deleting the /hbase node in ZooKeeper requires ZooKeeper to be running, otherwise the client cannot connect.
Source: https://blog.csdn.net/weixin_43648549/article/details/123615758 (CSDN, CC 4.0 BY-SA)
Using fix 2, table creation succeeded.
NTP: aligning cluster time
Install NTP
Every machine in the cluster needs NTP installed.
3.1 Check whether it is already installed
Check: yum list installed | grep ntp
3.2 Install NTP
Install: yum -y install ntp
4. Configure the NTP server
First set this machine's local time to the correct standard time zone.
vim /etc/ntp.conf
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).
driftfile /var/lib/ntp/drift
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery
# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict ::1
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 127.127.1.0
fudge 127.127.1.0 stratum 10
#broadcast 192.168.1.255 autokey # broadcast server
#broadcastclient # broadcast client
#broadcast 224.0.1.1 autokey # multicast server
#multicastclient 224.0.1.1 # multicast client
#manycastserver 239.255.254.254 # manycast server
#manycastclient 239.255.254.254 autokey # manycast client
# Enable public key cryptography.
#crypto
includefile /etc/ntp/crypto/pw
# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys
# Specify the key identifiers which are trusted.
#trustedkey 4 8 42
# Specify the key identifier to use with the ntpdc utility.
#requestkey 8
# Specify the key identifier to use with the ntpq utility.
#controlkey 8
# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
# Disable the monitoring facility to prevent amplification attacks using ntpdc
# monlist command when default restrict does not include the noquery flag. See
# CVE-2013-5211 for more details.
# Note: Monitoring will not be disabled with the limited restriction flag.
disable monitor
service ntpd start
systemctl enable ntpd.service
Check which time server this node is syncing from
Check: ntpq -p
Check the NTP port
Check which port ntpd listens on after startup (default: 123).
Check: netstat -anp | grep ntp
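To confirm the clocks actually converged (HBase's Master refuses to start with a large clock skew, as the PleaseHoldException section above shows), here is an optional Python sketch; it assumes pip install ntplib on the machine running the check and that ntpd on each node answers NTP queries.
import ntplib

# Query each node's ntpd and print the offset between its clock and ours.
client = ntplib.NTPClient()
for host in ["hadoop0", "hadoop1", "hadoop2"]:
    try:
        response = client.request(host, version=3)
        print("%s: offset %+.3f s" % (host, response.offset))
    except Exception as exc:
        print("%s: query failed (%s)" % (host, exc))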
Web UI port
http://192.168.56.108:60010/master-status (60010 is the HBase 1.x master UI port; on HBase 2.x the master UI defaults to 16010)
Python remote access
thrift
Install the Thrift service on the HBase master node
Install Thrift
Download Thrift: wget http://mirror.bit.edu.cn/apache/thrift/0.10.0/thrift-0.10.0.tar.gz
tar zvxf thrift-0.10.0.tar.gz
cd thrift-0.10.0/
./configure
sudo make && make install
Note: if the build fails with "g++: error: /usr/lib64/libboost_unit_test_framework.a: No such file or directory", run:
yum install boost-devel-static
no acceptable C compiler found in $PATH
This means no C compiler is installed.
Run yum -y install gcc-c++ to install one, then run gcc -v to confirm the compiler is present.
Thrift service
./bin/hbase-daemon.sh start thrift
Check with jps.
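Besides jps, a client machine can confirm the Thrift server is reachable before installing the Python packages; a minimal sketch (9090 is the default Thrift port, 192.168.56.108 is hadoop0 in this setup):
import socket

# connect_ex returns 0 when the port accepts connections.
with socket.socket() as sock:
    sock.settimeout(3)
    result = sock.connect_ex(("192.168.56.108", 9090))
print("thrift server reachable" if result == 0 else "cannot reach port 9090 (errno %d)" % result)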
Install the required packages on the Python client machine
pip install thrift
pip install happybase
command ‘gcc‘ failed: No such file or directory
yum install gcc
pip install hbase-python
Problem log
1. import hbase fails because it cannot find google: the protobuf package is missing
pip install protobuf
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
pip install protobuf==3.20.1
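If downgrading protobuf is not an option, the second workaround from the error message can be applied in the script itself; a sketch (the environment variable must be set before the generated hbase modules are imported):
import os

# Force the pure-Python protobuf implementation (slower, but avoids the descriptor error).
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

import hbase  # the hbase-python package installed above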
TTransportException: TSocket read 0 bytes
Fix: switch the connection to TCompactProtocol (the script below also uses a framed transport).
Example script
from thrift import Thrift
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol, TCompactProtocol
from hbase import Hbase
from hbase.ttypes import *
from hbase.Hbase import *
import pandas as pd

class hbaseUtils(object):
    __slots__ = ['transport', 'client']

    def __init__(self):
        # Thrift server address and port: the host is the HMaster node (where the thrift server runs); 9090 is the default thrift port
        transport = TSocket.TSocket('192.168.56.108', 9090)
        # optional timeout (milliseconds)
        transport.setTimeout(5000)
        # transport layer (TFramedTransport or TBufferedTransport)
        self.transport = TTransport.TFramedTransport(transport)
        # wire protocol
        protocol = TCompactProtocol.TCompactProtocol(self.transport)
        # build the client
        self.client = Hbase.Client(protocol)

HB = hbaseUtils()
HB.transport.open()
HB.client.getTableNames()
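The same connection can also be made with happybase (installed above), which wraps the Thrift API in a friendlier interface. A sketch follows; note that happybase defaults to a buffered transport and binary protocol, so if the Thrift server was started in framed/compact mode (as the hbaseUtils class above assumes), pass transport='framed' and protocol='compact' to happybase.Connection. The table and row names here are examples only.
import happybase

connection = happybase.Connection("192.168.56.108", port=9090)
print(connection.tables())                        # same information as HB.client.getTableNames()

# connection.create_table("demo", {"cf": dict()})  # create a table with one column family "cf"
table = connection.table("demo")
table.put(b"row1", {b"cf:col1": b"value1"})       # write one cell
print(table.row(b"row1"))                         # read the row back
for key, data in table.scan(limit=10):            # scan the first few rows
    print(key, data)

connection.close()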
Docker
Quick install
Check the OS and kernel versions
Docker supports 64-bit CentOS 7, CentOS 8 and later, and requires a Linux kernel of at least 3.10.
Two recommended ways to check the Linux release: lsb_release -a or cat /etc/redhat-release.
The output of lsb_release -a shows the system here is CentOS 7; next confirm the kernel is at least 3.10.
Three ways to check the kernel version:
cat /proc/version
uname -a
uname -r
Any of the three shows the kernel version.
DaoCloud one-line installer (mirror for users in China):
curl -sSL https://get.daocloud.io/docker | sh
Official one-line installer:
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
sudo systemctl start docker
Once Docker is installed, the common Docker operations are listed below (a Python equivalent using the Docker SDK is sketched after the list):
Search the registry: docker search <image>
Pull an image: docker pull <image>
List running containers: docker ps
List all containers: docker ps -a
Remove a container: docker rm <container_id>
List images: docker images
Remove an image: docker rmi <image_id>
Start a stopped container: docker start <container_id>
Stop a container: docker stop <container_id>
Restart a container: docker restart <container_id>
Run a new container: docker run -it ubuntu /bin/bash
Enter a container: docker attach <container_id> or docker exec -it <container_id> /bin/bash (the latter is recommended)
More commands: docker help
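The same day-to-day operations can be driven from Python with the Docker SDK, which fits the Python theme of these notes. This is an addition on my part, not something the original used, and it assumes pip install docker has been run on the host:
import docker

client = docker.from_env()                      # connect to the local Docker daemon
client.images.pull("ubuntu")                    # docker pull ubuntu
container = client.containers.run("ubuntu", "echo hello", detach=True)   # docker run
print(client.containers.list(all=True))         # docker ps -a
print(client.images.list())                     # docker images
container.stop()                                # docker stop
container.remove()                              # docker rm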
JupyterNotebook
https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html
jupyter/tensorflow-notebook
docker pull jupyter/base-notebook:latest  # pull the image
docker run --rm -p 8888:8888 jupyter/base-notebook:latest  # run the image
3. This works, but I want to map the notebook root directory to the host. The startup log shows the default root is /home/jovyan, which contains many hidden files that error out when mounted, so the Jupyter working directory needs to be changed.
4. Command format for overriding settings: docker run -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e'
That is, append start-notebook.sh and then the option and its value.
Full option list: https://jupyter-notebook.readthedocs.io/en/stable/config.html
Source: https://blog.csdn.net/Qwertyuiop2016/article/details/120439121 (CSDN, CC 4.0 BY-SA)
NotebookApp.password: notebook access password
NotebookApp.allow_password_change: whether the password may be changed remotely
NotebookApp.allow_remote_access: allow access from non-local hosts (the original notes simply always set it)
NotebookApp.open_browser: whether to open a browser; already False by default inside a container, so it can be omitted
NotebookApp.notebook_dir: the notebook working directory
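Putting the pieces together, the container can also be launched from Python with the Docker SDK, with the working directory moved as described above. This is again an assumption on my part (pip install docker is required, and /data/notebooks is just a hypothetical host directory to map in):
import docker

client = docker.from_env()
client.containers.run(
    "jupyter/base-notebook:latest",
    command=["start-notebook.sh", "--NotebookApp.notebook_dir=/home/jovyan/work"],
    ports={"8888/tcp": 8888},
    volumes={"/data/notebooks": {"bind": "/home/jovyan/work", "mode": "rw"}},
    detach=True,
)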
conda
https://repo.anaconda.com/archive/
$ sh Anaconda3-2022.05-Linux-x86_64.sh
source /dellfsqd2/ST_LBI/USER/myname/app/conda/anaconda3/bin/activate
$ conda init
$ conda create --name snowflakes
PIP
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple package_name
Tsinghua: https://pypi.tuna.tsinghua.edu.cn/simple
Aliyun: http://mirrors.aliyun.com/pypi/simple/
USTC: https://pypi.mirrors.ustc.edu.cn/simple/
HUST: http://pypi.hustunique.com/
Shandong University of Technology: http://pypi.sdutlinux.org/
Douban: http://pypi.douban.com/simple/