Preamble
Fortune is a simple program that displays a pseudorandom message from a database of quotations. In this blog, I am going to show you that those electronic cookies (random poignant, inspirational, silly, or snide phrases) can make your digital life surprisingly fun, just like a fortune cookie after a meal.
Installation: fortune, cow, and pony
The following installation guide is based on the official Arch Linux site.
Display an ASCII image
Bash can display an ASCII image in the terminal on login; add the following:
```
# ~/.bashrc
cat /path/to/text/file
```
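If the banner file might be missing, a guarded variant avoids printing an error at every login. A minimal sketch, where the path `$HOME/.banner` is just my own example:

```shell
# ~/.bashrc -- print the banner only if the file is actually readable
banner="$HOME/.banner"        # hypothetical path to your ASCII-art file
if [ -r "$banner" ]; then
    cat "$banner"
fi
```

The `if` form (rather than `[ -r ... ] && cat ...`) keeps the line's exit status clean, which matters if your shell startup uses `set -e`.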
Display fortune cookie phrases with fortune-mod
```
sudo pacman -S fortune-mod
```
To have a random phrase displayed when logging into a terminal, use:
```
# ~/.bashrc
command fortune
```
Combine both with a Cow
```
sudo pacman -S cowsay
```
and you can make the cow deliver the fortune cookie for you, or pick a different cow from the list located in /usr/share/cows:
```
cowsay -l
```
or pick an ASCII image online, or design one yourself:
```
cowsay -f /path/to/file "$(fortune)"
```
Add to your ~/.bashrc:
```
# ~/.bashrc
command cowthink "$(fortune)"
```
```
 _______________________________________
/ Forecast, n.: A prediction of the     \
| future, based on the past, for which  |
| the forecaster demands payment in the |
\ present.                              /
 ---------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
```
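A side note on quoting, which applies to all the `$(fortune)` examples above: an unquoted command substitution lets the shell re-split the text on whitespace, collapsing fortune's line breaks. This is plain shell behavior, shown here with `printf` standing in for `fortune` so the sketch runs anywhere:

```shell
# printf stands in for fortune so the example is self-contained
quote=$(printf 'line one\nline two')

# Unquoted, the newline is lost to word splitting; quoted, it survives
unquoted=$(echo $quote)       # becomes "line one line two"
quoted=$(echo "$quote")       # keeps the embedded newline
```

Piping (`fortune | cowsay`) sidesteps the issue entirely, since the text never passes through word splitting.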
Nicer eye candy with more complex one-liners
```
# random cow, random mode, random speech/thought bubble
command fortune -a | fmt -80 -s | \
  $(shuf -n 1 -e cowsay cowthink) \
  -$(shuf -n 1 -e b d g p s t w y) \
  -f $(shuf -n 1 -e $(cowsay -l | tail -n +2)) -n

# random cow picked straight from the cow files
fortune -c | cowthink -f $(find /usr/share/cows -type f | shuf -n 1)
```
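The one-liners above lean on `shuf -n 1 -e`, which prints one item chosen at random from its arguments; that is how they randomize the program, the mode flag, and the cow. A standalone sketch using only coreutils:

```shell
# shuf -n 1 -e picks exactly one of the listed words at random
tool=$(shuf -n 1 -e cowsay cowthink)
mode=$(shuf -n 1 -e b d g p s t w y)
echo "would run: $tool -$mode"
```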
```
 _________________________________________________
/ Adolescence, n.:                                \
\ The stage between puberty and adultery.         /
 -------------------------------------------------
  \
   \
       oO)-.                       .-(Oo
      /__  _\                     /_  __\
      \  \(  |     ()~()         |  )/  /
       \__|\ |    (-___-)        | /|__/
       '  '--'    ==`-'==        '--'  '
```
If you prefer a pony (with color)
```
sudo pacman -S ponysay
```
```
ponysay "$(fortune)"
```
Get more fortunes from Wikiquote
First, grab some premade ones:
```
yaourt -S fortune-mod-all-en
```
They will be stored in /usr/share/fortune/.
Generate fortunes yourself from Wikiquote with wikiquote-fortune
Install wikiquote-fortune
```
yaourt -S wikiquote-fortune
```
Test it out with a show name (the part that follows https://en.wikiquote.org/wiki/):
```
wikiquote-fortune Inside_Out_\(2015_film\)
cat quotes        # "quotes" is the default output filename
strfile quotes    # create the .dat file for random access
```
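For context, the `quotes` file that `strfile` indexes is just fortune's plain-text database format: entries separated by lines containing a single `%`. A hand-made sketch (the file name `myquotes` and its contents are my own placeholders):

```shell
# Build a tiny fortune database by hand; "%" lines separate entries
cat > myquotes <<'EOF'
%
First sample quote.
%
Second sample quote,
spanning two lines.
%
EOF

# Then index it for random access and try it out:
#   strfile myquotes      # writes myquotes.dat
#   fortune ./myquotes
```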
Here is something I came up with to make life a bit easier.
- Generate a "shows" file with your shows (two tab-separated columns: a short name, then the Wikiquote page title):

```
cat ./shows
InsideOut	Inside Out (2015 film)
InPlainSight	In Plain Sight
```
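Note that the two columns must be separated by a TAB, because the script extracts them with `cut -f`, whose default delimiter is the tab character. A quick self-contained check (file name `shows.example` is my own placeholder):

```shell
# The shows file maps a short name to the Wikiquote page title, TAB-separated
printf 'InsideOut\tInside Out (2015 film)\n' > shows.example
short=$(cut -f 1 shows.example)
title=$(cut -f 2 shows.example)
echo "$short -> $title"
```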
- Use the following script (getquote.sh) to generate the quotes:
```
#!/usr/bin/sh
# ./getquote.sh
# Get the short show names from the file "shows"
showlist=$(cat shows | cut -f 1)
mkdir -p fortune
for show in $showlist; do
    realshow=$(cat shows | grep "^$show" | cut -f 2) &&
    real_show=$(echo $realshow | sed -e "s: :_:g") &&
    # retrieve the quotes with wikiquote-fortune
    wikiquote-fortune $real_show &&
    # create the random-access file (quotes.dat) with strfile
    strfile quotes &&
    # rename the output from the default name ("quotes") to the show name
    mv quotes.dat ./fortune/$show.dat &&
    mv quotes ./fortune/$show
done
```

NB: bash -x ./getquote.sh to debug if needed; man fortune for more info.
- Make it executable with chmod +x, try it, and enjoy!
- Copy the quotes and .dat files to /usr/share/fortune/.
NB: this does not work with quotes other than movies; some other scripts may be worth a mention.
Retrieve quotes other than movies from Wikiquote
The following is a Python script I wrote to retrieve quotes from Wikiquote.
```
#!/bin/python
#####################################################################
# Scraping Wikiquote for fortunes.
# Usage: get the show name from wikiquote.com, eg. life or philosophy
#        run 'python this-script', and input the showname when prompted,
#        which is the piece after https://en.wikiquote.org/wiki/[....]
#        then copy the fortune file and its dat file to /usr/share/fortune/
# Author: Y Tan 02/21/2017-08:25:34 PM
#####################################################################
import urllib.request
import urllib.parse
import json
import lxml.etree
import lxml.html
import subprocess

WORD_BLACKLIST = ['quoted', 'Variant:', 'Retrieved', 'Notes:']
MIN_QUOTE_LEN = 6
MIN_QUOTE_WORDS = 3
MAIN_PAGE = "Main Page"
# max_quotes = 10

# prompt for the subject
subject = input(
    "Input the subject from wikiquote \n"
    "which is the piece after https://en.wikiquote.org/wiki/[....]:")
url = "https://en.wikiquote.org/w/api.php?format=json" \
      "&action=parse&prop=text|categories&disableeditsection&page=" + subject
res = urllib.request.urlopen(url)
body = res.read().decode()
data = json.loads(body)
html_content = data['parse']['text']['*']
tree = lxml.html.fromstring(html_content)


def is_quote(txt):
    txt_split = txt.split()
    invalid_conditions = [
        not txt or not txt[0].isupper() or len(txt) < MIN_QUOTE_LEN,
        len(txt_split) < MIN_QUOTE_WORDS,
        any(True for word in txt_split if word in WORD_BLACKLIST),
        txt.endswith(('(', ':', ']')),
    ]
    # Returns False if any invalid condition is true, otherwise True.
    return not any(invalid_conditions)


def is_quote_node(node):
    # Discard nodes with the <small> tag
    if node.find('small') is not None:
        return False
    # Discard nodes that are just a link
    # (using xpath so lxml will show text nodes)
    # The link may be inside <i> or <b> tags, so keep peeling layers
    suspect_node = node
    while True:
        node_children = suspect_node.xpath('child::node()')
        if len(node_children) != 1:
            break
        suspect_node = node_children[0]
        if not isinstance(suspect_node, lxml.etree._Element):
            break
        # if suspect_node.tag == 'a':
        #     return False
    return True


quotes_list = []

# Remove the table of contents
toc_list = tree.xpath('//div[@id="toc"]')
for toc in toc_list:
    toc.getparent().remove(toc)

# Scan list items and description tags inside description lists.
# Also grab headlines to skip some sections.
node_list = tree.xpath('//div/ul/li|//div/dl|//h2')

# Skip all quotes above the first heading
skip_to_next_heading = True
for node in node_list:
    if node.tag != 'h2' and skip_to_next_heading:
        continue
    if node.tag == 'h2':
        skip_to_next_heading = False
        heading_text = node.text_content().lower()
        # Commence skipping (heading_text is lowercased above)
        if heading_text in ('cast', 'see also', 'external links'):
            skip_to_next_heading = True
        continue
    # <dl>'s are assumed to be multi-line dialogue
    if node.tag == 'dl':
        dds = node.xpath('dd')
        if not all(is_quote_node(dd) for dd in dds):
            continue
        full_dialogue = '\n'.join(
            dd.text_content().strip() for dd in dds)
        quotes_list.append(full_dialogue)
        # if max_quotes == len(quotes_list):
        #     break
        continue
    # Handle <li>'s: extract the author and remove it from the quote
    uls = node.xpath('ul')
    author = ''
    for ul in uls:
        author = ul.text_content().strip()
        author = ' '.join(author.split())
        author = '\n\n\t---\t' + author
        ul.getparent().remove(ul)
    if not is_quote_node(node):
        continue
    txt = node.text_content().strip()
    if is_quote(txt):  # and max_quotes > len(quotes_list):
        txt_normal = ' '.join(txt.split())
        txt_normal += author
        quotes_list.append(txt_normal)
        # quotes_list.append(author)
        # if max_quotes == len(quotes_list):
        #     quotes_list1 = '\n%\n'.join(quotes_list)
        #     break

quotes_list1 = '\n%\n'.join(quotes_list)
# print(quotes_list1)
with open("/home/tan/org/fortune/" + subject, "w") as text_file:
    print("%\n: {}\n%".format(quotes_list1), file=text_file)
subprocess.call(["strfile", "/home/tan/org/fortune/" + subject])
```
About the Author:
Beyond 8 hours - Computer, Sports, Family...