DistCp from Hadoop cluster to AWS S3 bucket
This article may help you copy data between an on-premises Hadoop cluster and an AWS S3 bucket.
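As a starting point, here is a minimal sketch of such a copy using the s3a connector; the bucket name, paths, and keys below are placeholders, and in practice credentials often come from core-site.xml or an IAM role rather than the command line:

# Copy an HDFS directory to an S3 bucket via the s3a connector
hadoop distcp \
  -Dfs.s3a.access.key=YOUR_ACCESS_KEY \
  -Dfs.s3a.secret.key=YOUR_SECRET_KEY \
  hdfs://namenode:8020/data/source_dir \
  s3a://your-bucket/target_dir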
Data import using Sqoop: Teradata example $ sqoop import --connect jdbc:teradata://server_name/DATABASE=db1,LOGMECH=LDAP,CHARSET=UTF8 --driver "com.teradata.jdbc.TeraDriver" --username <user1> --password <password1> --query "select a.*, b.* from a inner join b on a.a_id=b.a_id where \$CONDITIONS AND a.f1 in ('A','B','C') group by 1" --null-string '\\N' --null-non-string...
Want to write a Hadoop program in less than 5 minutes? Read on for a quick look at how it's done. We use Python and Hadoop streaming to complete the task.
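To give a flavor of it, here is a minimal word-count sketch in the same spirit (not the post's exact code); the streaming-jar path is an assumption and varies by distribution:

# Write a tiny mapper and reducer, then run them with Hadoop streaming
cat > mapper.py <<'EOF'
#!/usr/bin/env python
import sys
# Emit each word with a count of 1
for line in sys.stdin:
    for word in line.split():
        print('%s\t1' % word)
EOF

cat > reducer.py <<'EOF'
#!/usr/bin/env python
import sys
# Input arrives sorted by key, so counts for a word are contiguous
current, count = None, 0
for line in sys.stdin:
    word, n = line.rsplit('\t', 1)
    if word != current:
        if current is not None:
            print('%s\t%d' % (current, count))
        current, count = word, 0
    count += int(n)
if current is not None:
    print('%s\t%d' % (current, count))
EOF

hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
  -files mapper.py,reducer.py \
  -mapper 'python mapper.py' \
  -reducer 'python reducer.py' \
  -input /tmp/wordcount/input \
  -output /tmp/wordcount/output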
At times some directories on HDFS have too many inodes (files and folders) and are really hard to delete. Some attempts even lead to out-of-memory (OOM) errors such as the following: INFO retry.RetryInvocationHandler: java.io.IOException: com.google.protobuf.ServiceException: java.lang.OutOfMemoryError:...
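A common first-aid step (a sketch, not necessarily the fix the post arrives at) is to give the client JVM more heap before deleting, and to skip the trash so the huge tree is not moved first:

# Raise the HDFS client heap, then delete without going through trash
export HADOOP_CLIENT_OPTS="-Xmx8g"
hdfs dfs -rm -r -skipTrash /path/with/many/inodes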
Got into a situation where you were asked to extract Hive queries and the time they took to execute? Steps: on the log files, run the two extracts below awk 'match($0, "^([^ ]+).*Completed executing command\\(queryId=([0-9a-z_-]+)\\); Time taken: (.*)", a) {print "COMPLETE\t" a[1]...
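For orientation, a simplified one-liner along the same lines; it pulls only the completion lines (the post's full extract pairs them with the start lines), the log file name is a placeholder, and the three-argument match() requires GNU awk:

# Print queryId and elapsed time from HiveServer2 completion lines
awk 'match($0, /Completed executing command\(queryId=([0-9a-z_-]+)\); Time taken: (.*)/, a) {print a[1] "\t" a[2]}' hiveserver2.log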
Below is a data storage estimator based on message size and throughput. Enter the HDFS replication factor and the amount of raw data end users will generate over time. Hope this helps you. Data calculator: message size in bytes, messages per...
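As a worked example of the same estimate (all numbers are illustrative): 500-byte messages arriving 1,000 times per second come to 500 x 1000 x 86400 ≈ 43.2 GB of raw data per day, and with the default HDFS replication factor of 3 that is about 129.6 GB of cluster storage per day:

# Illustrative back-of-the-envelope storage estimate
awk 'BEGIN {
  size = 500; rate = 1000; repl = 3      # bytes/msg, msg/s, HDFS replicas
  raw_gb = size * rate * 86400 / 1e9     # ~43.2 GB/day raw
  print "raw GB/day:", raw_gb, "replicated GB/day:", raw_gb * repl
}'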
Source: https://snakebite.readthedocs.io/en/latest/client.html Example:
>>> from snakebite.client import Client
>>> client = Client("localhost", 8020, use_trash=False)
>>> for x in client.ls(['/']):
...     print x
Warning: many methods return generators, which means they need to be consumed to execute! Documentation will explicitly...
Look up the YARN queue of a user from bash. This bash function will search the capacity scheduler XML and return the queues for the user: getYarnQueue() { grep $1 -B 1 /etc/hadoop/conf/capacity-scheduler.xml | awk -F'.' '/name/{print $(NF-1)}'; } Works on Hortonworks HDP. Usage:...
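Presumably it is invoked with a username; for example (user name hypothetical):

$ getYarnQueue alice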
Okay, so msck repair is not working and you saw something like the following: 0: jdbc:hive2://hive_server:10000> msck repair table mytable; Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask (state=08S01,code=1) It seems to appear because of higher...
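If the cause is a large partition count, one knob worth knowing about (a hedged suggestion, not necessarily the post's exact fix) is Hive's batched repair, which adds partitions in smaller chunks instead of all at once:

-- Repair in batches of 1000 partitions (batch size is illustrative)
SET hive.msck.repair.batch.size=1000;
MSCK REPAIR TABLE mytable;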
Source: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_administration/content/distcp_between_ha_clusters.html DistCp Between HA Clusters To copy data between HA clusters, use the dfs.internal.nameservices property in the hdfs-site.xml file to explicitly specify the name services belonging to the local cluster, while continuing to use the dfs.nameservices property to specify all of the name services in...
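A sketch of what that looks like in the local cluster's hdfs-site.xml, with hypothetical name service IDs ns1 (local) and ns2 (remote):

<!-- every name service DistCp may address -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1,ns2</value>
</property>
<!-- only the name service belonging to this cluster -->
<property>
  <name>dfs.internal.nameservices</name>
  <value>ns1</value>
</property>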