Expat-IT Tech Bits









    Creative Commons License
    This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

    This site has no ads. To help with hosting, crypto donations are accepted:
    Bitcoin: 1JErV8ga9UY7wE8Bbf1KYsA5bkdh8n1Bxc
    Zcash: zcLYqtXYFEWHFtEfM6wg5eCV8frxWtZYkT8WyxvevzNC6SBgmqPS3tkg6nBarmzRzWYAurgs4ThkpkD5QgiSwxqoB7xrCxs

    Mon, 24 Jun 2019

    /Admin/VPN-options: Wireguard VPN

    WireGuard VPN so far penetrates the Great Firewall for me quite reliably. (I am currently using the Digital Ocean data center in Singapore.)

    Easily provision / reprovision a temporary WireGuard service on a cloud server using Algo:


    If you are using Digital Ocean, you can very easily turn your VPN server on (less than one cent per hour, at most five dollars per month) and off (it costs almost nothing to hold in a standby "off" state) using the "DO Swimmer" app from F-Droid:


    To use WireGuard on Android, install the WireGuard app


    and simply scan the server-specific QR code provided by Algo. To use WireGuard on Debian unstable,

    `apt-get install wireguard wireguard-dkms wireguard-tools`

    Then grab the debian.conf (or whatever you called it) config file from Algo and copy it to (for instance)


    Turn on your Debian wireguard VPN (as root) with

    `wg-quick up wg0`

    and observe the wg0 interface in the ifconfig output. wg-quick automatically sets up a default route through the wg0 connection.
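For reference, the config file Algo hands you has roughly this shape; every value below is a placeholder rather than a working key or address:

```ini
[Interface]
# Placeholders only -- Algo generates the real keys and addresses
PrivateKey = CLIENT_PRIVATE_KEY
Address =
DNS =

[Peer]
PublicKey = SERVER_PUBLIC_KEY
AllowedIPs =, ::/0
Endpoint = SERVER_IP:51820
```

Note that `wg-quick up wg0` looks for this file at /etc/wireguard/wg0.conf by default, and 51820 is WireGuard's default port.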

    WireGuard is coming soon to the Qubes kernel; test for kernel readiness in Qubes with:

    `ip link add dev wg0 type wireguard`

    posted at: 06:04 | path: /Admin/VPN-options | permanent link to this entry

    Mon, 17 Jun 2019

    /Admin/commandLine/files: search-replace-multiple-files
    Search & Replace Text

    A single file:

    sed -i 's/maverick/natty/g' /etc/apt/sources.list

    Rename all the files in a directory (here .txt to .md):

    for file in *.txt; do mv "${file}" "${file%.*}.md"; done

    In multiple files and subdirectories:

    perl -e "s/OLDSTRING/NEWSTRING/g;" -pi.save $(find /path/to/directory/to/be/searched -type f)

    Or, after first installing the Perl script "rename":

    find . -name "*.txt" -exec rename 's/\.txt$/\.md/' {} \;

    Note that "." in the command below seems to include hidden files; replace "." with "*" and hidden files are not included.

    grep -rl OLDSTRING . | xargs perl -pi~ -e 's/OLDSTRING/NEWSTRING/'
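For the in-place, multi-file case, GNU sed plus find can do the same job as the perl one-liner above. A minimal sketch (the directory and the OLDSTRING/NEWSTRING values are invented for the demo):

```shell
# Set up a tiny demo tree (invented paths, not from the post)
mkdir -p /tmp/sr-demo/sub
echo "foo OLDSTRING bar" > /tmp/sr-demo/a.txt
echo "OLDSTRING again"   > /tmp/sr-demo/sub/b.txt

# Replace in every matching file, keeping .bak backups (GNU sed -i)
find /tmp/sr-demo -type f -name '*.txt' \
  -exec sed -i.bak 's/OLDSTRING/NEWSTRING/g' {} +

# Show the result
grep -r NEWSTRING /tmp/sr-demo
```

The `-i.bak` suffix keeps a backup of each touched file, which is cheap insurance when the replacement spans many files.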

    posted at: 07:05 | path: /Admin/commandLine/files | permanent link to this entry

    Mon, 02 May 2016

    /Admin/SSL: Free (Website) SSL with LetsEncrypt

    Last I checked, reading about LetsEncrypt[1] can make one a bit dizzy, but if you follow these steps it is really very straightforward:

    On the subject of renewals, as I recall every issued certificate expires after three months, and becomes eligible for renewal after two months. A two-week period seems just about right.
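    If the certbot client is in use (an assumption; LetsEncrypt supports several clients), renewal is typically automated with a cron entry along these lines. The schedule is illustrative, and certbot renew only touches certificates already inside their renewal window:

```text
# Check once a day at 04:00; renews only certificates close to expiry
0 4 * * * certbot renew --quiet
```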

    [1] https://letsencrypt.org/

    posted at: 07:54 | path: /Admin/SSL | permanent link to this entry

    Wed, 17 Feb 2016

    /Admin/VPN-options/Tinc: Simple Tinc VPN

    Tinc[1] is a rare animal: an actual peer-to-peer VPN that (for *NIX users) is easy to set up, not widely used, and so (as far as I am aware) not blocked by anyone, including the GFW (Great Firewall of China).

    My main OS is Debian, so this example of a very simple tinc configuration will follow Debian standards in getting two Tinc VPN nodes talking to one another -- typically, one would be your Desktop, and the other would be a server with a public IP address running Squid.

    apt-get install tinc

    This is all that /etc/tinc contains after install:

    Tinc can run multiple daemons, each handling a separate Tinc network on a separate subnet. To have each tinc network started automatically, simply add its network name to /etc/tinc/nets.boot.
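    For a single network named myvpn (the name used throughout this example), nets.boot would contain just:

```text
# /etc/tinc/nets.boot -- one network name per line
myvpn
```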

    Each tinc network is represented by a separate directory under /etc/tinc/. Each Tinc network also requires a hosts subdirectory where the public keys for other peers in this network are held. For the simplest possible configuration, here are the main decisions to make:

    Let's first configure your desktop:

    Create the requisite directory structure:

    mkdir -p /etc/tinc/myvpn/hosts

    And create the two configuration files for this network:

    vi /etc/tinc/myvpn/tinc-up

    containing something like this:

    modprobe tun
    ifconfig myvpn IPADDRESS netmask NETMASK

    where IPADDRESS is the private tinc IP address of the node you are currently configuring. And

    vi /etc/tinc/myvpn/tinc.conf

    containing this:

    Name = mydesk
    Device = /dev/net/tun
    Port = 19001
    ConnectTo = myremote

    The Port line is optional, if omitted tinc will listen on the default port 655.

    Create your keys for the myvpn network (each separate tinc network/subnet has different keys) for the desktop node by running this on it:

    tincd -K -n myvpn

    (If things are correctly configured you should be able to just accept the defaults.) This is the pair of keys you just created:

    The former is your private key, the latter is your public key. Now edit the public key:

    vi /etc/tinc/myvpn/hosts/mydesk

    DO NOT modify the key, but add this config block ABOVE the key:

    Subnet =
    Address = x.x.x.x 19001

    The top line is the VPN-private IP of the node; the bottom line is the real-world (usually public) IP and port where OTHER peers in this tinc network will find the machine. You will be sharing this file with all other peers in this network, and this config block tells them where to find this node.


    chown -R root: /etc/tinc/
    chmod a+rx /etc/tinc/myvpn/tinc-*

    The tinc-* files are scripts that must be executable; otherwise your configuration will subtly break. Now start tinc:

    systemctl start tinc.service

    If all goes well, ifconfig should show a myvpn device with IP

    Configure the remote machine:

    It is the same as the above desktop config, with these exceptions:

    Putting it together:

    Once tinc is running on the server, copy the public tinc key of each machine into the tinc hosts directory of the other machine.

    Make sure port 19001 is open in the firewall on the myremote end.

    Restart tinc on both ends:

    systemctl restart tinc.service

    and you should now be able to ping the tinc IP of the other machine from both ends.

    This delivers to you a secure connection between your desktop and a remote machine. If you would like to proxy browser traffic from mydesk through myremote, just install squid on myremote and enable connections from the tinc subnet. In your browser, set the proxy IP to the myremote tinc IP, port 3128 (the default squid port), configured as an HTTP proxy (squid is an HTTP proxy, not SOCKS v5).

    [1] http://www.tinc-vpn.org/

    posted at: 08:17 | path: /Admin/VPN-options/Tinc | permanent link to this entry

    Thu, 04 Feb 2016

    /Admin/databases/MySQL: Master-Slave Replication: First Steps


    There are a lot of search engine hits for this subject; I liked Rackspace's contribution[1] the best. I am going to improvise off of that document, and I am going to assume there is an existing MySQL server that will be the master that we will replicate. First of all, the new master will need a 'slave' user:

    grant replication slave on *.* TO slave_user@'ip-address' identified by 'password';
    flush privileges;

    And on a Debian / Ubuntu server, make these changes to /etc/mysql/my.cnf:

    # bind-address =
    server-id = 1
    log_bin = /var/log/mysql/mysql-bin.log
    expire_logs_days = 15
    max_binlog_size = 200M
    binlog_ignore_db = mysql

    Then restart MySQL. We comment out bind-address to permit non-localhost connections to the MySQL master from the slave. Both master and slave need a defined server-id, and they need to be different. log_bin is where the master records transactions that the slave will later pickup. The rest should be self-explanatory, except to say that in my setup binlog_ignore_db seems to be ignored. I wish it was not, but so far no major consequences.


    Before replication can be started, the databases on both ends need to be exactly the same. On the master:

    FLUSH TABLES WITH READ LOCK;
    SHOW MASTER STATUS;

    The first line puts all master databases into read-only mode, and the second line will print out the file name and position (an integer) at which the binlog's record of writes to the database stopped. It is very important to record these two values, as they will be needed later on the slave. Now dump all the databases except (optionally) mysql and (not optionally) information_schema and performance_schema (the latter two are internal MySQL things that do not replicate). First get a list of all databases:

    mysql -uroot -p -e 'show databases' --skip-column-names | tr '\n' ' '

    Edit the above list to remove mysql, information_schema and performance_schema, and then dump all databases:

    mysqldump -uroot -p --databases list-of-databases | gzip > alldbs.sql.gz
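The list-editing step can be scripted. A sketch that filters the internal schemas out of the space-separated list (the database names shop and blog are invented stand-ins for real output of the show-databases command):

```shell
# Stand-in for: mysql -uroot -p -e 'show databases' --skip-column-names | tr '\n' ' '
DBS="mysql information_schema performance_schema shop blog"

# Remove the schemas that must not be dumped: split to one name per line,
# filter, then rejoin with spaces
REPL_DBS=$(echo "$DBS" | tr ' ' '\n' \
  | grep -Ev '^(mysql|information_schema|performance_schema)$' \
  | tr '\n' ' ')

echo "$REPL_DBS"
```

The surviving names are what would then be handed to mysqldump --databases.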

    Don't forget to release the read lock on the master and resume normal operation:

    UNLOCK TABLES;

    Copy alldbs.sql.gz to the slave server.


    Install mysql-server on the slave, and make these changes to /etc/mysql/my.cnf:

    # bind-address =
    tmpdir = /var/tmp
    server-id = 2

    and restart MySQL. Allowing non-localhost connections on the slave is actually not necessary for replication, but will be necessary later for the tools we will use for integrity checking and repairs. The tmpdir must be preserved through reboots, so we have moved it from /tmp to /var/tmp (and installed tmpreaper to keep it clean). Now import the dump of databases from the master:

    zcat alldbs.sql.gz | mysql -uroot -p

    And start replication:

    CHANGE MASTER TO MASTER_HOST='master-ip-address', MASTER_USER='slave_user', MASTER_PASSWORD='password', MASTER_LOG_FILE='filename', MASTER_LOG_POS=123456;

    start slave;

    where MASTER_LOG_FILE and MASTER_LOG_POS are the values that you recorded on the master when you locked the databases and issued a "SHOW MASTER STATUS" command, and slave_user is the user you created earlier on the master. Now check replication status:

    show slave status\G

    The field of particular interest is Seconds_Behind_Master. If things are working properly that integer should become smaller quite rapidly as the slave catches up with the master. Eventually that integer should get down very close to zero, if not zero. I am almost always seeing zero with my setup.

    Something you will want to verify after master and slave are synced is a slave reboot. You should find that after a reboot Seconds_Behind_Master quickly returns to zero and replication continues uninterrupted.

    [1] http://www.rackspace.com/knowledge_center/article/set-up-mysql-master-slave-replication

    posted at: 00:48 | path: /Admin/databases/MySQL | permanent link to this entry

    Thu, 14 Jan 2016

    /Admin/databases/MySQL: MySQL User Management

    To create a typical, "ordinary" MySQL account with full access to one database:

    mysql> GRANT ALL on databasename.* TO 'typicaluser'@'localhost' IDENTIFIED BY 'thispassword' REQUIRE SSL;

    Note that the "REQUIRE SSL" is optional (and requires that SSL be first setup, which it is not by default). "localhost" may be replaced by an IP address, or by a '%' to permit access from anywhere. The '*' may be replaced by a specific table name, to grant access to only one table.

    To create a root / superuser with full privileges on everything and all databases, use one of these methods:

    mysql> GRANT SUPER ON *.* TO user@'localhost' IDENTIFIED BY 'password';

    To see any given user's privileges:

    show grants for user@localhost;

    Theoretically one can rescind any of those grants with a REVOKE statement. Follow any and all of these privilege modifications with:

    FLUSH PRIVILEGES;

    And finally, to remove a user:

    mysql> DROP USER 'privileged'@'';

    Note that many (including myself in the past) would say that deleting the user's record from the mysql.user table is how one goes about this. In fact, if you do that, and then click the "Privileges" tab in phpmyadmin, you will probably find that user still listed. Use "DROP USER" instead, to delete all mention of this user from all privilege tables.

    posted at: 02:21 | path: /Admin/databases/MySQL | permanent link to this entry

    Sun, 27 Dec 2015

    /Admin/commandLine/network: Port/Firewall Testing with Netcat and Socat

    To avoid application-related issues and just do a bare-bones test of whether or not a port is open, use netcat or socat. On the server end, turn off the application that should be listening on port 9092, then:


    Start listening on the server side:

    netcat -l -p 9092 (newer versions of netcat)
    netcat -l 9092 (older versions)

    Attempt to connect to the port from the client on the other side of the firewall:

    nc IPADDRESS 9092

    If everything is working, anything you type on either end should be mirrored on the other end.
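If neither tool is installed, bash itself can attempt a TCP connect via its /dev/tcp feature (a sketch; this is bash-specific, and it can only test connects, not listen):

```shell
# Return 0 if host:port accepts a TCP connection, non-zero otherwise.
# /dev/tcp is a bash feature, not a real device node.
port_open() {
  local host=$1 port=$2
  (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null
}

port_open localhost 9092 && echo "9092 open" || echo "9092 closed"
```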


    Start listening on the server side:

    socat TCP4-LISTEN:9092,reuseaddr,fork gopen:/tmp/capture,seek-end=0,append

    Now send some text from the client end to the listener:

    date | socat STDIO tcp:localhost:9092

    In this example, the client is on the same machine, replace "localhost" with an IP or DNS if not. All text sent from the client side will be appended to the file /tmp/capture.

    posted at: 02:44 | path: /Admin/commandLine/network | permanent link to this entry

    Thu, 24 Dec 2015

    /Admin/Apache/PHP: Best Practices for Custom PHP Parameters

    First, do not edit php.ini, instead add a file to


    called something like php_ini_local.ini, for instance. Place parameters you would like to customize, like upload_max_filesize, in there, and they will take precedence over those in the php.ini file(s). This way php.ini will also upgrade gracefully to newer versions in future without manual intervention.
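    A sketch of such an override file; the parameter values are illustrative, and the filename is just the suggestion above:

```ini
; php_ini_local.ini -- illustrative overrides
upload_max_filesize = 64M
post_max_size = 64M
```

    Note that post_max_size must be at least as large as upload_max_filesize, or large uploads will still fail.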

    To test parameter changes (before and after) the simple way, use PHP CLI in a terminal:

    # php -a
    <?php echo ini_get('upload_max_filesize'); ?>
    Ctrl-d

    posted at: 03:47 | path: /Admin/Apache/PHP | permanent link to this entry

    Tue, 15 Dec 2015

    /Admin/virtualization/virtualBox: Increase or Decrease Storage Space for a VirtualBox VM

    Increasing is nearly trivial:

    VBoxManage modifyhd virtualBoxUbuntu.vdi --resize 15000

    for a 15G drive. Then inside the running VM use gparted to increase the main partition size to use all the newly added space.

    To decrease, one can only reduce the amount of variable space used by the image by freeing up all unused space; apparently one cannot reduce the upper limit.

    To free up space, delete unwanted files on the VM, install the zerofree package, then reboot into recovery mode and mount the root (main) partition ro:

    mount -n -o remount,ro -t ext3 /dev/sda1 /

    Then zero out the unused space in the partition:

    zerofree -v /dev/sda1

    And compact the image:

    VBoxManage modifyhd virtualBoxUbuntu.vdi --compact

    You should find the file system that the vdi is located in now has a lot more free space.

    [1] http://dantwining.co.uk/2011/07/18/how-to-shrink-a-dynamically-expanding-guest-virtualbox-image/

    posted at: 02:38 | path: /Admin/virtualization/virtualBox | permanent link to this entry

    Tue, 01 Dec 2015

    /Admin/databases/Oracle: Resetting Administrative Passwords in Oracle

    Log in to the server as the "oracle" user (typically).

    Find the sqlplus binary, and login as SYS, for instance:

    /home/oracle/app/oracle/product/12.1.0/dbhome_1/bin/sqlplus "/ as sysdba"

    Then change the SYSTEM password:

    SQL> show user
    SQL> passw system
    SQL> quit
    Then login as SYSTEM:
    /home/oracle/app/oracle/product/12.1.0/dbhome_1/bin/sqlplus "system/password"

    and change the SYS password:

    SQL> show user
    SQL> passw sys
    SQL> quit

    [1] https://rolfje.wordpress.com/2007/01/16/lost-oracle-sys-and-system-password/

    posted at: 02:11 | path: /Admin/databases/Oracle | permanent link to this entry

    Mon, 23 Nov 2015

    /Admin/databases/MySQL: MySQL Replication Error Recovery


    Sometimes when replaying the binary log, the slave will come across something that stops it cold, and it will go no further. One way to get around this is to start over and completely reinstall. Another way is:

    STOP SLAVE;
    SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
    START SLAVE;

    The result of the above is to simply skip the problematic instruction in the log, and go on. This might result in a table that is slightly out of sync, which can be dealt with by other means (see later in this post).

    If the above does not work, and specifically if you see this error:

    Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.

    the "Could not parse relay log event entry" is suggestive of a solution[1]: wipe out the relay log and re-create it from wherever the slave is currently at in the master log:

    mysql> STOP SLAVE; CHANGE MASTER TO MASTER_LOG_FILE = 'mysql-bin.000012', MASTER_LOG_POS = 148376500; START SLAVE;


    MASTER_LOG_POS is Exec_Master_Log_Pos, and
    MASTER_LOG_FILE is Relay_Master_Log_File

    in the current output of "SHOW SLAVE STATUS".

    Another possibility[2], especially after the disorderly shutdown of a slave (which is reputed to throw the relay-log position out of whack, resulting in a replay of already-run transactions, with many inserts then throwing duplicate key errors), is to temporarily ignore replication errors, i.e. add this to my.cnf:

    slave-skip-errors = all

    and let the slave run for a few minutes before turning the option off again.

    [1] https://stackoverflow.com/questions/12097696/mysql-replication-fails-with-error-could-not-parse-relay-log-event-entry
    [2] https://www.percona.com/blog/2009/03/04/making-replication-a-bit-more-reliable/

    posted at: 04:46 | path: /Admin/databases/MySQL | permanent link to this entry

    /Admin/databases/MySQL: Integrity Checking / Syncing MySQL Tables / Databases Using Percona Tools

    Install the Percona Toolkit:

    apt-get install percona-toolkit

    in order to gain access to pt-table-sync and pt-table-checksum. Then check for tables that are out of sync with:

    /usr/bin/pt-table-checksum --quiet --ignore-databases mysql,performance_schema,information_schema -umaster_user -ppassword

    (Note: if databases that are not being replicated are not excluded from pt-table-checksum, it can hang up indefinitely.)

    Then on the MySQL master, force a table on the slave into the exact state of that on the master with:

    pt-table-sync --execute h=localhost,D=databasename,t=tablename h=slaveIP -umaster_user -ppassword

    Or force a database on the slave into the exact state of that on the master with:

    pt-table-sync --execute h=localhost h= --databases databasename -umaster_user -ppassword

    posted at: 04:44 | path: /Admin/databases/MySQL | permanent link to this entry

    Tue, 09 Jun 2015

    /Admin/commandLine/misc: Send E-mail From the Command Line

    echo "This will go into the body of the mail." | mail -s "Hello world" you@youremail.com
    Or, if you wish to send to a remote SMTP server, install the heirloom-mailx package and:

    export smtp=host:port
    mailx -s "some subject" address@example.com
    some random body text

    ending the body with Ctrl-d.

    This also works well in a cron job.

    posted at: 21:57 | path: /Admin/commandLine/misc | permanent link to this entry

    Thu, 08 Jan 2015

    /Admin/commandLine/network: Configuring and Testing Proxies on the Command Line

    Much *nix software will obey the proxy environment variables:

    export http_proxy=
    export https_proxy=

    for a squid proxy on IP, for instance. (Use "unset" to turn these environment variables off later.) You can then test with and without the proxy, for example:

    w3m http://google.com/
    w3m -no-proxy http://google.com/

    Or you can test directly with telnet:

    telnet 3128

    which in the case of squid will result in the following output:

    Escape character is '^]'.

    And then enter the URL you wish to access over the proxy:

    GET http://www.google.com/ HTTP/1.1

    followed by this line:


    followed by two returns, and the site html will be dumped in your terminal.
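    Putting the pieces together, a complete manual proxy request looks like this (the host is illustrative; the blank line is what the two returns produce, terminating the headers):

```text
GET http://www.google.com/ HTTP/1.1
Host: www.google.com

```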

    posted at: 04:16 | path: /Admin/commandLine/network | permanent link to this entry

    Sat, 13 Dec 2014

    /Admin/databases/Oracle: Importing a Dump File in Oracle

    In Oracle, there is an old way to import ("imp") and a new way ("impdp"). Here I have examples of both.

    There is a one-to-one relationship between users and schemas (="database"), but not all users have schemas. The first step in creating a new schema/database is to create a new user (C## prefix required):

    SQL> create user C##anewuser identified by anewuser;

    Grant some minimal privileges:

    SQL> grant connect, create session, imp_full_database to C##anewuser;
    SQL> alter user C##anewuser quota 200M on users;

    Prepare to import the dump file:

    su oracle
    cd /home/oracle/app/oracle/product/12.1.0/dbhome_1/bin

    Import an old-style dump file:

    ./imp \"/ as sysdba\" file=/home/oracle/app/oracle/admin/orcl/dpdump/dumpfile.dmp fromuser=olduser touser=C##anewuser log=/tmp/dumpfile.log;

    Import a new-style dump file:

    ./impdp C##newuser/password dumpfile=expdp_dumpfile.dmp remap_schema=olduser:C##newuser log=expdp_dumpfile.log full=y;

    Some gotchas: both of the above require knowing the old schema name, which is not terribly easy to figure out. Try leaving out the fromuser/touser/remap_schema options on the first pass and just look at the error messages in the log. Sometimes an import is looking for a tablespace that does not exist. Create the stupid thing:


    and then re-run the import.

    Now verify the new schema shows up in the list of existing schemas:

    SQL> select distinct owner from dba_objects;

    Verify tables exist in newly imported schema:


    Verify a table has columns:

    SQL> describe C##anewuser.sometablename;

    Verify a table has contents:

    SQL> select * from C##anewuser.sometablename;

    posted at: 05:20 | path: /Admin/databases/Oracle | permanent link to this entry