ibeis.algo.hots package

Submodules

ibeis.algo.hots._grave_bayes module

Arc reversal http://www.cs.toronto.edu/~cebly/Papers/simulation.pdf

TODO:

Need to find faster, more mature libraries:

http://dlib.net/bayes.html
http://www.cs.waikato.ac.nz/ml/weka/
http://www.cs.waikato.ac.nz/~remco/weka.bn.pdf
https://code.google.com/p/pebl-project/
https://github.com/abhik/pebl
http://www.cs.ubc.ca/~murphyk/Software/bnsoft.html

Demo case where we think we know the labels of the other annotations. Only one name is unknown, and we need to classify it as one of the other known names.
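A minimal, hypothetical sketch of this demo case (not the pgmpy model built in this module): one unknown annotation, a uniform prior over the known names, and made-up match likelihoods; the posterior is the normalized product.

import numpy as np

names = ['fred', 'sue', 'tom']                   # known names of the other annots
prior = np.full(len(names), 1.0 / len(names))    # uniform prior over names
likelihood = np.array([0.9, 0.2, 0.1])           # P(observed scores | unknown is this name), made up
posterior = prior * likelihood
posterior /= posterior.sum()
print(dict(zip(names, posterior.round(3))))      # the unknown is most likely 'fred'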

References

https://en.wikipedia.org/wiki/Bayesian_network
https://class.coursera.org/pgm-003/lecture/17
http://www.cs.ubc.ca/~murphyk/Bayes/bnintro.html
http://www3.cs.stonybrook.edu/~sael/teaching/cse537/Slides/chapter14d_BP.pdf
http://www.cse.unsw.edu.au/~cs9417ml/Bayes/Pages/PearlPropagation.html
https://github.com/pgmpy/pgmpy.git
http://pgmpy.readthedocs.org/en/latest/
http://nipy.bic.berkeley.edu:5000/download/11
http://pgmpy.readthedocs.org/en/latest/wiki.html#add-feature-to-accept-and-output-state-names-for-models
http://www.csse.monash.edu.au/bai/book/BAI_Chapter2.pdf

Clustering with CRF:
http://srl.informatik.uni-freiburg.de/publicationsdir/tipaldiIROS09.pdf
http://www.dis.uniroma1.it/~dottoratoii/media/students/documents/thesis_tipaldi.pdf
An Unsupervised Conditional Random Fields Approach for Clustering Gene Expression Time Series
http://bioinformatics.oxfordjournals.org/content/24/21/2467.full
CRFs:
http://homepages.inf.ed.ac.uk/csutton/publications/crftutv2.pdf
AlphaBeta Swap:

https://github.com/amueller/gco_python
https://github.com/pmneila/PyMaxflow
http://www.cs.cornell.edu/rdz/papers/bvz-iccv99.pdf

Iteratively Reweighted Graph Cut for Multi-label MRFs with Non-convex Priors
http://arxiv.org/pdf/1411.6340.pdf

Fusion Moves:
http://www.robots.ox.ac.uk/~vilem/fusion.pdf
http://hci.iwr.uni-heidelberg.de/publications/mip/techrep/beier_15_fusion.pdf

Consensus Clustering

Explaining Away

Course Notes:

Tie breaking for MAP assignment: random perturbation. https://class.coursera.org/pgm-003/lecture/60
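A tiny sketch of that tie-breaking idea, using nothing beyond numpy: jitter tied MAP scores with epsilon noise so argmax does not always favor the first index.

import numpy as np

rng = np.random.RandomState(0)
map_scores = np.array([0.4, 0.4, 0.2])                # two tied MAP assignments
perturbed = map_scores + rng.uniform(0, 1e-9, size=map_scores.shape)
print(int(perturbed.argmax()))                         # tie broken at random instead of always index 0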

Correspondence Problem is discussed in https://class.coursera.org/pgm-003/lecture/68

Sparse Pattern Factors

Collective Inference: Plate Models / Aggregator CPD is used to define dependencies.

ibeis.algo.hots._grave_bayes.flow()[source]

http://pmneila.github.io/PyMaxflow/maxflow.html#maxflow-fastmin

pip install PyMaxFlow
pip install pystruct
pip install hdbscan
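A hedged sketch of the alpha-expansion call that flow() points at, based on the maxflow.fastmin documentation linked above; the grid size, costs, and Potts model below are illustrative, and the exact signature may differ across PyMaxflow versions.

import numpy as np
import maxflow.fastmin

rows, cols, num_labels = 8, 8, 3
rng = np.random.RandomState(0)
D = rng.rand(rows, cols, num_labels)   # unary data costs per node and label
V = 1.0 - np.eye(num_labels)           # Potts pairwise cost: 0 for equal labels, 1 otherwise
labels = maxflow.fastmin.aexpansion_grid(D, V)
print(labels.shape)                     # (8, 8) grid of label assignments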

ibeis.algo.hots._grave_bayes.make_name_model(num_annots, num_names=None, verbose=True, mode=1)[source]

Defines the general name model

CommandLine:

python -m ibeis.algo.hots.bayes --exec-make_name_model --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.bayes import *  # NOQA
>>> defaults = dict(num_annots=2, num_names=2, verbose=True, mode=2)
>>> kw = ut.argparse_funckw(make_name_model, defaults)
>>> model = make_name_model(**kw)
>>> ut.quit_if_noshow()
>>> show_model(model, show_prior=True)
>>> ut.show_if_requested()
ibeis.algo.hots._grave_bayes.name_model_mode1(num_annots, num_names=None, verbose=True)[source]

spaghetti

CommandLine:

python -m ibeis.algo.hots.bayes --exec-name_model_mode1 --show
python -m ibeis.algo.hots.bayes --exec-name_model_mode1
python -m ibeis.algo.hots.bayes --exec-name_model_mode1 --num-annots=3

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.bayes import *  # NOQA
>>> defaults = dict(num_annots=2, num_names=2, verbose=True)
>>> kw = ut.argparse_funckw(name_model_mode1, defaults)
>>> model = name_model_mode1(**kw)
>>> ut.quit_if_noshow()
>>> show_model(model, show_prior=False, show_title=False)
>>> ut.show_if_requested()
ibeis.algo.hots._grave_bayes.name_model_mode5(num_annots, num_names=None, verbose=True, mode=1)[source]
ibeis.algo.hots._grave_bayes.show_model(model, evidence={}, soft_evidence={}, **kwargs)[source]

References

http://stackoverflow.com/questions/22207802/pygraphviz-networkx-set-node-level-or-layer

ibeis.algo.hots._grave_bayes.try_query(model, infr, evidence, interest_ttypes=[], verbose=True)[source]

CommandLine:

python -m ibeis.algo.hots.bayes --exec-try_query --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.bayes import *  # NOQA
>>> verbose = True
>>> other_evidence = {}
>>> name_evidence = [1, None, 0, None]
>>> score_evidence = ['high', 'low', 'low']
>>> query_vars = None
>>> model = make_name_model(num_annots=4, num_names=4, verbose=True, mode=1)
>>> model, evidence, soft_evidence = update_model_evidence(model, name_evidence, score_evidence, other_evidence)
>>> interest_ttypes = ['name']
>>> infr = pgmpy.inference.BeliefPropagation(model)
>>> evidence = infr._ensure_internal_evidence(evidence, model)
>>> query_results = try_query(model, infr, evidence, interest_ttypes, verbose)
>>> result = ('query_results = %s' % (str(query_results),))
>>> ut.quit_if_noshow()
>>> show_model(model, show_prior=True, **query_results)
>>> ut.show_if_requested()

ibeis.algo.hots._grave_scorenorm module

ibeis.algo.hots._neighbor_experiment module

ibeis.algo.hots._neighbor_experiment.augment_nnindexer_experiment()[source]

References

http://answers.opencv.org/question/44592/flann-index-training-fails-with-segfault/

CommandLine:

utprof.py -m ibeis.algo.hots._neighbor_experiment --test-augment_nnindexer_experiment
python -m ibeis.algo.hots._neighbor_experiment --test-augment_nnindexer_experiment

python -m ibeis.algo.hots._neighbor_experiment --test-augment_nnindexer_experiment --db PZ_MTEST --diskshow --adjust=.1 --save "augment_experiment_{db}.png" --dpath='.' --dpi=180 --figsize=9,6
python -m ibeis.algo.hots._neighbor_experiment --test-augment_nnindexer_experiment --db PZ_Master0 --diskshow --adjust=.1 --save "augment_experiment_{db}.png" --dpath='.' --dpi=180 --figsize=9,6 --nosave-flann --show


python -m ibeis.algo.hots._neighbor_experiment --test-augment_nnindexer_experiment --db PZ_Master0 --diskshow --adjust=.1 --save "augment_experiment_{db}.png" --dpath='.' --dpi=180 --figsize=9,6 --nosave-flann --no-api-cache --nocache-uuids

python -m ibeis.algo.hots._neighbor_experiment --test-augment_nnindexer_experiment --db PZ_MTEST --show
python -m ibeis.algo.hots._neighbor_experiment --test-augment_nnindexer_experiment --db PZ_Master0 --show

# RUNS THE SEGFAULTING CASE
python -m ibeis.algo.hots._neighbor_experiment --test-augment_nnindexer_experiment --db PZ_Master0 --show
# Debug it
gdb python
run -m ibeis.algo.hots._neighbor_experiment --test-augment_nnindexer_experiment --db PZ_Master0 --show
gdb python
run -m ibeis.algo.hots._neighbor_experiment --test-augment_nnindexer_experiment --db PZ_Master0 --diskshow --adjust=.1 --save "augment_experiment_{db}.png" --dpath='.' --dpi=180 --figsize=9,6

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots._neighbor_experiment import *  # NOQA
>>> # execute function
>>> augment_nnindexer_experiment()
>>> # verify results
>>> ut.show_if_requested()
ibeis.algo.hots._neighbor_experiment.flann_add_time_experiment()[source]

builds plot of number of annotations vs indexer build time.

TODO: time experiment
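A rough sketch of the timing loop this experiment implements, assuming the standard pyflann API (FLANN().build_index) and random descriptors in place of real annotation features; plotting is omitted.

import time
import numpy as np
from pyflann import FLANN

rng = np.random.RandomState(42)
sizes = [1000, 2000, 4000, 8000]
build_times = []
for num in sizes:
    vecs = (rng.rand(num, 128) * 255).astype(np.uint8)   # fake SIFT-like descriptors
    flann = FLANN()
    tic = time.time()
    flann.build_index(vecs, algorithm='kdtree', trees=4)
    build_times.append(time.time() - tic)
print(list(zip(sizes, build_times)))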

CommandLine:

python -m ibeis.algo.hots._neighbor_experiment --test-flann_add_time_experiment --db PZ_MTEST --show
python -m ibeis.algo.hots._neighbor_experiment --test-flann_add_time_experiment --db PZ_Master0 --show
utprof.py -m ibeis.algo.hots._neighbor_experiment --test-flann_add_time_experiment --show

valgrind --tool=memcheck --suppressions=valgrind-python.supp python -m ibeis.algo.hots._neighbor_experiment --test-flann_add_time_experiment --db PZ_MTEST --no-with-reindex

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots._neighbor_experiment import *  # NOQA
>>> import ibeis
>>> #ibs = ibeis.opendb('PZ_MTEST')
>>> result = flann_add_time_experiment()
>>> # verify results
>>> print(result)
>>> ut.show_if_requested()
ibeis.algo.hots._neighbor_experiment.pyflann_remove_and_save()[source]

References

# Logic goes here
~/code/flann/src/cpp/flann/algorithms/kdtree_index.h
~/code/flann/src/cpp/flann/util/serialization.h
~/code/flann/src/cpp/flann/util/dynamic_bitset.h

# Bindings go here
~/code/flann/src/cpp/flann/flann.cpp
~/code/flann/src/cpp/flann/flann.h

# Contains stuff for the flann namespace like flann::log_level
# Also has Index with Matrix<ElementType> features; SEEMS USEFUL
~/code/flann/src/cpp/flann/flann.hpp

# Wrappers go here
~/code/flann/src/python/pyflann/flann_ctypes.py
~/code/flann/src/python/pyflann/index.py

~/local/build_scripts/flannscripts/autogen_bindings.py

Grepping:

cd ~/code/flann/src
grep -ER cleanRemovedPoints *
grep -ER removed_points_ *

CommandLine:

python -m ibeis.algo.hots._neighbor_experiment --exec-pyflann_remove_and_save

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots._neighbor_experiment import *  # NOQA
>>> pyflann_remove_and_save()
ibeis.algo.hots._neighbor_experiment.pyflann_test_remove_add()[source]

CommandLine:

python -m ibeis.algo.hots._neighbor_experiment --exec-pyflann_test_remove_add

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots._neighbor_experiment import *  # NOQA
>>> pyflann_test_remove_add()
ibeis.algo.hots._neighbor_experiment.pyflann_test_remove_add2()[source]

CommandLine:

python -m ibeis.algo.hots._neighbor_experiment --exec-pyflann_test_remove_add2

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots._neighbor_experiment import *  # NOQA
>>> pyflann_test_remove_add2()
ibeis.algo.hots._neighbor_experiment.subindexer_time_experiment()[source]

builds plot of number of annotations vs indexer build time.

TODO: time experiment

ibeis.algo.hots._neighbor_experiment.test_incremental_add(ibs)[source]
Parameters:
  • ibs (IBEISController) –

CommandLine:

python -m ibeis.algo.hots._neighbor_experiment --test-test_incremental_add

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index_cache import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb('PZ_MTEST')
>>> result = test_incremental_add(ibs)
>>> print(result)
ibeis.algo.hots._neighbor_experiment.test_multiple_add_removes()[source]

CommandLine:

python -m ibeis.algo.hots._neighbor_experiment --exec-test_multiple_add_removes

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots._neighbor_experiment import *  # NOQA
>>> result = test_multiple_add_removes()
>>> print(result)

ibeis.algo.hots._pipeline_helpers module

ibeis.algo.hots._pipeline_helpers.print_nearest_neighbor_assignments(qvecs_list, nns_list)[source]
ibeis.algo.hots._pipeline_helpers.testdata_matching(*args, **kwargs)[source]
>>> from ibeis.algo.hots._pipeline_helpers import *  # NOQA
ibeis.algo.hots._pipeline_helpers.testdata_post_sver(defaultdb=u'PZ_MTEST', qaid_list=None, daid_list=None, codename=u'vsmany', cfgdict=None)[source]
>>> from ibeis.algo.hots._pipeline_helpers import *  # NOQA
ibeis.algo.hots._pipeline_helpers.testdata_pre(stopnode, defaultdb=u'testdb1', p=[u'default'], a=[u'default:qindex=0:1,dindex=0:5'], **kwargs)[source]

New (1-1-2016) generic pipeline node testdata getter

Parameters:
  • stopnode (str) –
  • defaultdb (str) – (default = u’testdb1’)
  • p (list) – (default = [u’default:’])
  • a (list) – (default = [u’default:qsize=1,dsize=4’])
Returns:

(ibs, qreq_, args)

Return type:

tuple

CommandLine:

python -m ibeis.algo.hots._pipeline_helpers --exec-testdata_pre --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots._pipeline_helpers import *  # NOQA
>>> stopnode = 'build_chipmatches'
>>> defaultdb = 'testdb1'
>>> p = ['default:']
>>> a = ['default:qindex=0:1,dindex=0:5']
>>> qreq_, args = testdata_pre(stopnode, defaultdb, p, a)
ibeis.algo.hots._pipeline_helpers.testdata_pre_baselinefilter(defaultdb=u'testdb1', qaid_list=None, daid_list=None, codename=u'vsmany')[source]
ibeis.algo.hots._pipeline_helpers.testdata_pre_sver(defaultdb=u'PZ_MTEST', qaid_list=None, daid_list=None)[source]
>>> from ibeis.algo.hots._pipeline_helpers import *  # NOQA
ibeis.algo.hots._pipeline_helpers.testdata_pre_vsonerr(defaultdb=u'PZ_MTEST', qaid_list=[1], daid_list=u'all')[source]
>>> from ibeis.algo.hots._pipeline_helpers import *  # NOQA
ibeis.algo.hots._pipeline_helpers.testdata_pre_weight_neighbors(defaultdb=u'testdb1', qaid_list=[1, 2], daid_list=None, codename=u'vsmany', cfgdict=None)[source]
TODO: replace testdata_pre_weight_neighbors with
>>> qreq_, args = plh.testdata_pre('weight_neighbors', defaultdb='testdb1',
>>>                                a=['default:qindex=0:1,dindex=0:5,hackerrors=False'],
>>>                                p=['default:codename=vsmany,bar_l2_on=True,fg_on=False'], verbose=True)
ibeis.algo.hots._pipeline_helpers.testdata_scoring(defaultdb=u'PZ_MTEST', qaid_list=[1], daid_list=u'all')[source]
ibeis.algo.hots._pipeline_helpers.testdata_sparse_matchinfo_nonagg(defaultdb=u'testdb1', p=[u'default'])[source]
ibeis.algo.hots._pipeline_helpers.testrun_pipeline_upto(qreq_, stop_node=None, verbose=True)[source]

Main tester function. Runs the pipeline by mirroring request_ibeis_query_L0, but stops at a requested breakpoint and returns the local variables.

Convenience: runs the pipeline for tests; this should mirror request_ibeis_query_L0.
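A hedged usage sketch combining this with testdata_pre above; the stop node and database are just examples, and the return value is assumed here to be the dict of pipeline locals described above.

from ibeis.algo.hots._pipeline_helpers import testdata_pre, testrun_pipeline_upto

stop_node = 'build_chipmatches'
qreq_, args = testdata_pre(stop_node, defaultdb='testdb1',
                           p=['default:'], a=['default:qindex=0:1,dindex=0:5'])
locals_ = testrun_pipeline_upto(qreq_, stop_node)   # assumed: dict of pipeline locals at the breakpoint
print(sorted(locals_.keys()))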

ibeis.algo.hots.automatch_suggestor module

Reports decisions and confidences about names (identifications) and exemplars using query results objects.

class ibeis.algo.hots.automatch_suggestor.ChoiceTuple(sorted_nids, sorted_nscore, sorted_rawscore, sorted_aids, sorted_ascores)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

sorted_aids

Alias for field number 3

sorted_ascores

Alias for field number 4

sorted_nids

Alias for field number 0

sorted_nscore

Alias for field number 1

sorted_rawscore

Alias for field number 2

class ibeis.algo.hots.automatch_suggestor.ExemplarDecision(new_exemplar_aids, remove_exemplar_aids)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

new_exemplar_aids

Alias for field number 0

remove_exemplar_aids

Alias for field number 1

ibeis.algo.hots.automatch_suggestor.exemplar_method1_distinctiveness(ibs, qaid, other_exemplars)[source]

choose as exemplar if it is distinctive with respect to other exemplars

ibeis.algo.hots.automatch_suggestor.exemplar_method2_randomness(qaid, other_exemplars)[source]

CommandLine:

python -m ibeis.algo.hots.automatch_suggestor --test-exemplar_method2_randomness

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.automatch_suggestor import *  # NOQA
>>> # build test data
>>> random.seed(0)
>>> qaid = 4
>>> other_exemplars = [1, 2, 3, 5, 6, 9]
>>> # execute function
>>> exemplar_decision = exemplar_method2_randomness(qaid, other_exemplars)
>>> exemplar_decision_list = [exemplar_method2_randomness(qaid, other_exemplars) for _ in range(1000)]
>>> # verify results
>>> flat_others = ut.flatten(ut.get_list_column(exemplar_decision_list, 1))
>>> result = str(flat_others)
>>> print(result)
[3, 2, 6, 1, 3, 2, 9, 3, 6, 2, 1, 5, 1, 6, 1]
ibeis.algo.hots.automatch_suggestor.get_qres_name_choices(ibs, cm)[source]

Returns all possible decisions a user could make.

TODO: Return the possibility of a merge. TODO: Ensure that the total probability of each possible choice sums to 1. This will define a probability distribution that we can take advantage of.
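A hedged sketch of the second TODO: normalize hypothetical name scores (plus a reserved mass for "new name") into a distribution that sums to one. The numbers are made up, not ibeis output.

import numpy as np

sorted_nscore = np.array([8.0, 3.0, 1.0])   # hypothetical scores for candidate names
p_new_name = 0.1                            # mass reserved for deciding "this is a new name"
p_names = (1.0 - p_new_name) * sorted_nscore / sorted_nscore.sum()
choice_probs = np.append(p_names, p_new_name)
assert np.isclose(choice_probs.sum(), 1.0)  # the choices now form a probability distribution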

Parameters:
  • ibs (IBEISController) – ibeis controller object
  • cm (QueryResult) – object of feature correspondences and scores
Returns:

choicetup

Return type:

ChoiceTuple

CommandLine:

python -m ibeis.algo.hots.automatch_suggestor --test-get_qres_name_choices:0

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.automatch_suggestor import *  # NOQA
>>> import ibeis  # NOQA
>>> # build test data
>>> cm, qreq_ = ibeis.testdata_cm()
>>> ibs = qreq_.ibs
>>> choicetup = get_qres_name_choices(ibs, cm)
>>> print(choicetup)
>>> result = ut.numpy_str(choicetup.sorted_nids[0:1], force_dtype=False)
>>> print(result)
np.array([1])
ibeis.algo.hots.automatch_suggestor.get_system_exemplar_suggestion(ibs, qaid)[source]

hotspotter returns an exemplar suggestion

TODO:
do a vsone query between all of the exemplars to see if this one is good enough to be added.
TODO:
build a complete graph of exemplar scores and only add if this one is lower than any other edge
SeeAlso:
ibsfuncs.set_exemplars_from_quality_and_viewpoint

CommandLine:

python -m ibeis.algo.hots.automatch_suggestor --test-get_system_exemplar_suggestion:1

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.automatch_suggestor import *  # NOQA
>>> import ibeis
>>> # build test data
>>> ibs = ibeis.opendb('testdb1')
>>> qaid = 2
>>> # execute function
>>> (autoexmplr_msg, exemplar_decision, exemplar_confidence) = get_system_exemplar_suggestion(ibs, qaid)
>>> # verify results
>>> result = str((autoexmplr_msg, exemplar_decision, exemplar_confidence))
>>> print(result)

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.automatch_suggestor import *  # NOQA
>>> import ibeis
>>> # build test data
>>> ibs = ibeis.opendb('testdb1')
>>> qaid = 1
>>> # execute function
>>> (autoexmplr_msg, exemplar_decision, exemplar_confidence) = get_system_exemplar_suggestion(ibs, qaid)
>>> # verify results
>>> result = str((autoexmplr_msg, exemplar_decision, exemplar_confidence))
>>> print(result)
ibeis.algo.hots.automatch_suggestor.get_system_name_suggestion(ibs, choicetup)[source]

Suggests a decision based on the current choices

Parameters:
  • ibs (IBEISController) –
  • qaid (int) – query annotation id
  • cm (QueryResult) – object of feature correspondences and scores
  • metatup (None) –
Returns:

(autoname_msg, autoname_func)

Return type:

tuple

CommandLine:

python -m ibeis.algo.hots.automated_matcher --test-test_incremental_queries:0
python -m ibeis.algo.hots.automated_matcher --test-test_incremental_queries:1
python -m ibeis.algo.hots.automated_matcher --test-get_system_name_suggestion

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.automatch_suggestor import *  # NOQA
>>> import ibeis
>>> # build test data
>>> cm, qreq_ = ibeis.testdata_cm()
>>> ibs = qreq_.ibs
>>> choicetup = get_qres_name_choices(ibs, cm)
>>> (autoname_msg, name, name_confidence) = get_system_name_suggestion(ibs, choicetup)
>>> # verify results
>>> result = str((autoname_msg, name, name_confidence))
>>> print(result)

ibeis.algo.hots.automated_helpers module

Idea:
What about using the probability of a descriptor match as a score, like in SIFT? We can learn that too.
Have:
  • semantic and visual uuids
  • Test that accepts unknown annotations one at a time and for each runs query, makes decision about name, and executes decision.
  • As a placeholder for exemplar decisions an exemplar is added if number of exemplars per name is less than threshold.
  • vs-one reranking query mode
  • test harness but start with larger test set
  • vs-one score normalizer ~~/ score normalizer for different values of K * / different params~~ vs-many score normalization doesn't actually matter. We just need the ranking.
  • need to add the multi-indexer code into the pipeline. Need to decide which subindexers to load given a set of daids
  • need to use set query as an exemplar if its vs-one reranking scores are below a threshold
  • flip the vsone ratio score so it is < .8 rather than > 1.2 or whatever (see the ratio-flip sketch after this list)
  • start from nothing and let the system make the first few decisions correctly
  • tell me the correct answer in the automated test
  • turn on multi-indexing. (should just work..., probably bugs though. Just need to throw the switch)
  • parameter to only add an exemplar if the post-normalized score is above a threshold
  • ensure vsone ratio test is happening correctly
  • normalization gets a cfgstr based on the query
  • need to allow for scores to be un-invalidated post spatial verification, e.g. when the first match is initially invalidated through spatial verification but the next matches survive.
  • keep distinctiveness weights from vsmany for vsone weighting; this basically involves keeping weights from different filters and not aggregating match weights until the end.
  • Put test query mode into the main application and work on the interface for it.
  • add matches to multiple animals (merge)
  • update normalizer (have set up the data structure to allow for it; need to integrate it seamlessly)
  • score normalization update: on add, take the new support data, reapply Bayes' rule, and save to the current cache for a given algorithm configuration.
  • spawn background process to reindex chunks of data
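Illustration of the ratio-flip item above, using the standard Lowe ratio test rather than the exact ibeis formulation: testing dist1/dist2 < 0.8 against the nearest and second-nearest distances is the flipped form of testing dist2/dist1 > 1.25.

dist1, dist2 = 0.30, 0.45                      # hypothetical nearest / second-nearest distances
ratio = dist1 / dist2
inv_ratio = dist2 / dist1
assert (ratio < 0.8) == (inv_ratio > 1.0 / 0.8)
print(round(ratio, 3), round(inv_ratio, 3))    # 0.667 1.5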
TODO:
  • Improve vsone scoring.
  • test case where there is a 360 degree view that is linkable from the test cases
  • ~~Remember name_confidence of decisions for manual review~~ Defer

Tasks:

Algorithm::
  • Incremental query needs to handle
    • test mode and live mode

    • normalizer update

    • use correct distinctiveness score in vsone

    • tested application of distinctiveness, foreground, ratio, spatial_verification, vsone verification, and score normalization.

  • Mathematically formal description of the space of choices
    • getting the probability of each choice will give us a much better confidence measure for our decision. An example of a probability partition might be: .2 merge with rank1, .2 merge with rank2, .5 merge with rank1 and rank2, .1 others.

  • Improved automated exemplar decision mechanism

  • Improved automated name decision mechanism

SQL::
  • New Image Columns
    • image_posix_timedelta
  • New Name Columns
    • name_temp_flag
    • name_alias_text
    • name_uuid
    • name_visual_uuid
    • name_member_annot_rowids_evalstr
    • name_member_num_annot_rowids
  • New ImageSet Columns
    • imageset_start_time
    • imageset_end_time
    • imageset_lat
    • imageset_lon
    • imageset_processed_flag
    • imageset_shipped_flag
Decision UIs::
  • Query versus top N results
    • ability to draw an undirected edge between the query and any number of results, i.e. create a match to any of the top results
    • a match to more than one result should by default merge the two names (this involves a name enhancement subtask); trigger a split / merge dialog

  • Is Exemplar
    • allows for user to set the exemplars for a given name
  • Name Progress
    • Shows the current name matching progress
  • Split
    • Allows a user to split off some images from a name into a new name or some other name.
  • Merge
    • Allows a user to join two names.
GUI::
  • NameTree needs to not refresh unless absolutely necessary

  • Time Sync

  • ImageSet metadata sync from the SMART

  • Hide shipped imagesets
    • put flag to turn them on
  • Mark processed imagesets

  • Gui naturally ensures that all annotations in the query belong to the same species

  • Garbage collection function that removes all non-exemplar information from imagesets that have been shipped.

  • Spawn process that reindexes large chunks of descriptors as the database grows.

LONG TERM TASKS:

Architecture:
  • Pipeline needs
    • DEFER: a move from dict based representation to list based
    • DEFER: spatial verification cyth speedup
    • DEFER: nearest neighbor (based on visual uuid caching) caching
Controller:
  • LONGTERM: AutogenController
    • register data convertors for verts / other eval columns. Make several convertors standard and we can tag those columns to autogenerate their functions.
    • be able to mark a column as determined by the aggregate of other columns. Then the data is either generated on the fly, or it is cached and the necessary book-keeping functions are autogenerated.
Decision UIs::
  • Is Exemplar
    • LONG TERM: it would be cool if they were visualized by using networkx or some gephi like program and clustered by match score.
ibeis.algo.hots.automated_helpers.add_annot_chunk(ibs_gt, ibs2, aids_chunk1, aid1_to_aid2)[source]

Adds annotations to the temporary database and prevents duplicate additions.

aids_chunk1 = aid_list1

Parameters:
Returns:

aids_chunk2

Return type:

list

ibeis.algo.hots.automated_helpers.annot_testdb_consistency_checks(ibs_gt, ibs2, aid_list1, aid_list2)[source]
ibeis.algo.hots.automated_helpers.assert_testdb_annot_consistency(ibs_gt, ibs2, aid_list1, aid_list2)[source]

just tests uuids

if anything goes wrong this should fix it:
from ibeis.other import ibsfuncs
aid_list1 = ibs_gt.get_valid_aids()
ibs_gt.update_annot_visual_uuids(aid_list1)
ibs2.update_annot_visual_uuids(aid_list2)
ibsfuncs.fix_remove_visual_dupliate_annotations(ibs_gt)
ibeis.algo.hots.automated_helpers.check_results(ibs_gt, ibs2, aid1_to_aid2, aids_list1_, incinfo)[source]

reports how well the incremental query ran when the oracle was calling the shots.

ibeis.algo.hots.automated_helpers.ensure_testdb_clean_data(ibs_gt, ibs2, aid_list1, aid_list2)[source]

removes previously set names and exemplars

ibeis.algo.hots.automated_helpers.interactive_commandline_prompt(msg, decisiontype)[source]
ibeis.algo.hots.automated_helpers.make_incremental_test_database(ibs_gt, aid_list1, reset)[source]

Makes a test database. Adds images and annotations but does not transfer names. If reset is True, the new database is guaranteed to be built from a fresh start.

Parameters:
  • ibs_gt (IBEISController) –
  • aid_list1 (list) –
  • reset (bool) – if True the test database is completely rebuilt
Returns:

ibs2

Return type:

IBEISController

ibeis.algo.hots.automated_helpers.setup_incremental_test(ibs_gt, clear_names=True, aid_order=u'shuffle')[source]

CommandLine:

python -m ibeis.algo.hots.automated_helpers --test-setup_incremental_test:0

python dev.py -t custom --cfg codename:vsone_unnorm --db PZ_MTEST --allgt --vf --va
python dev.py -t custom --cfg codename:vsone_unnorm --db PZ_MTEST --allgt --vf --va --index 0 4 8 --verbose

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.automated_helpers import *  # NOQA
>>> import ibeis # NOQA
>>> ibs_gt = ibeis.opendb('PZ_MTEST')
>>> ibs2, aid_list1, aid1_to_aid2 = setup_incremental_test(ibs_gt)

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.automated_helpers import *  # NOQA
>>> import ibeis  # NOQA
>>> ibs_gt = ibeis.opendb('GZ_ALL')
>>> ibs2, aid_list1, aid1_to_aid2 = setup_incremental_test(ibs_gt)

ibeis.algo.hots.automated_matcher module

CommandLine:

python -c "import utool as ut; ut.write_modscript_alias('Tinc.sh', 'ibeis.algo.hots.qt_inc_automatch')"

sh Tinc.sh --test-test_inc_query:0
sh Tinc.sh --test-test_inc_query:1
sh Tinc.sh --test-test_inc_query:2
sh Tinc.sh --test-test_inc_query:3 --num-initial 5000

python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:0
class ibeis.algo.hots.automated_matcher.Metatup(ibs_gt, aid1_to_aid2)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

aid1_to_aid2

Alias for field number 1

ibs_gt

Alias for field number 0

ibeis.algo.hots.automated_matcher.exec_exemplar_decision_and_continue(exemplar_decision, ibs, cm, qreq_, incinfo=None)[source]

DECISION STEP 4)

The exemplar decision from the previous step is executed. The persistent vsmany query request is updated if need be and execution continues (currently to the end of this iteration).

ibeis.algo.hots.automated_matcher.exec_name_decision_and_continue(chosen_names, ibs, cm, qreq_, incinfo=None)[source]

DECISION STEP 2)

The name decision from the previous step is executed and the score normalizer is updated. Then execution continues to the exemplar decision step.

ibeis.algo.hots.automated_matcher.execute_query_batch(ibs, qaid_chunk, qreq_vsmany_, incinfo)[source]

TODO: remove special query

ibeis.algo.hots.automated_matcher.generate_incremental_queries(ibs, qaid_list, incinfo=None)[source]

qt entry point. generates query results for the qt harness to process.

Parameters:

CommandLine:

python -m ibeis.algo.hots.automated_matcher --test-generate_incremental_queries

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.automated_matcher import *  # NOQA
>>> ibs, qaid_chunk = testdata_automatch()
>>> generate_incremental_queries(ibs, qaid_list)
ibeis.algo.hots.automated_matcher.generate_subquery_steps(ibs, qaid_chunk, incinfo=None)[source]

Generates query results for the qt harness to then send into the next decision steps.

Parameters:
  • ibs (IBEISController) – ibeis controller object
  • qaid_chunk
  • incinfo (dict) –

CommandLine:

python -m ibeis.algo.hots.automated_matcher --test-generate_subquery_steps

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.automated_matcher import *  # NOQA
>>> ibs, qaid_chunk = testdata_automatch()
>>> generate_subquery_steps(ibs, qaid_chunk)
ibeis.algo.hots.automated_matcher.get_name_suggestion(ibs, qaid, choicetup, incinfo)[source]
ibeis.algo.hots.automated_matcher.initialize_persistant_query_request(ibs, qaid_chunk)[source]
ibeis.algo.hots.automated_matcher.load_or_make_qreq(ibs, qreq_vsmany_, qaid_chunk)[source]
ibeis.algo.hots.automated_matcher.run_until_exemplar_decision_signal(ibs, cm, qreq_, incinfo=None)[source]

DECISION STEP 3)

Either the system or the user decides if the query should be added to the database as an exemplar.

ibeis.algo.hots.automated_matcher.run_until_finish(incinfo=None)[source]

DECISION STEP 5)

ibeis.algo.hots.automated_matcher.run_until_name_decision_signal(ibs, cm, qreq_, incinfo=None)[source]

DECISION STEP 1)

Either the system or the user makes a decision about the name of the query annotation.

CommandLine:

python -m ibeis.algo.hots.automated_matcher --test-run_until_name_decision_signal

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.automated_matcher import *  # NOQA
>>> ibs, qaid_chunk = testdata_automatch()
>>> exemplar_aids = ibs.get_valid_aids(is_exemplar=True)
>>> incinfo = {}
>>> gen = generate_subquery_steps(ibs, qaid_chunk, incinfo)
>>> item = six.next(gen)
>>> ibs, cm, qreq_, incinfo = item
>>> # verify results
>>> run_until_name_decision_signal(ibs, cm, qreq_, incinfo)
Ignore::
cm.ishow_top(ibs, sidebyside=False, show_query=True)
ibeis.algo.hots.automated_matcher.test_generate_incremental_queries(ibs_gt, ibs, aid_list1, aid1_to_aid2, num_initial=0, incinfo=None)[source]

TODO: move this somewhere else. Testing function.

Adds and queries new annotations one at a time with oracle guidance. ibs1 is ibs_gt, ibs2 is ibs.

ibeis.algo.hots.automated_matcher.testdata_automatch(dbname=None)[source]
ibeis.algo.hots.automated_matcher.update_normalizer(ibs, cm, qreq_, chosen_names)[source]

adds new support data to the current normalizer

FIXME: broken

Parameters:
  • ibs (IBEISController) – ibeis controller object
  • qreq (QueryRequest) – query request object with hyper-parameters
  • choicetup
  • name
Returns:

(tp_rawscore, tn_rawscore)

Return type:

tuple

CommandLine:

python -m ibeis.algo.hots.automated_matcher --test-update_normalizer

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.automated_matcher import *  # NOQA
>>> ibs, qaid_chunk = testdata_automatch()
>>> exemplar_aids = ibs.get_valid_aids(is_exemplar=True)
>>> incinfo = {}
>>> gen = generate_subquery_steps(ibs, qaid_chunk, incinfo)
>>> item = six.next(gen)
>>> ibs, cm, qreq_, incinfo = item
>>> qreq_.load_score_normalizer()
>>> # verify results
>>> chosen_names = ['easy']
>>> update_normalizer(ibs, cm, qreq_, chosen_names)

ibeis.algo.hots.automated_oracle module

module for making the correct automatic decisions in incremental tests

ibeis.algo.hots.automated_oracle.get_oracle_name_decision(metatup, ibs, qaid, choicetup, oracle_method=1)[source]

Find what the correct decision should be. ibs is the database we are working with; ibs_gt has pristine groundtruth.

ibeis.algo.hots.automated_oracle.get_oracle_name_suggestion(ibs, qaid, choicetup, metatup)[source]

main entry point for the oracle

ibeis.algo.hots.automated_oracle.oracle_method1(ibs_gt, ibs, qnid1, aid_list2, aid2_to_aid1, sorted_nids, MAX_LOOK)[source]

METHOD 1: MAKE BEST DECISION FROM GIVEN INFORMATION

ibeis.algo.hots.automated_oracle.oracle_method2(ibs_gt, qnid1)[source]

METHOD 2: MAKE THE ABSOLUTE CORRECT DECISION REGARDLESS OF RESULT

ibeis.algo.hots.automated_params module

module that specifies how we choose parameters based on current search database properties

ibeis.algo.hots.automated_params.choose_vsmany_K(num_names, qaids, daids)[source]

TODO: Should also scale up the number of checks

method for choosing K in the initial vsmany queries
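A hypothetical illustration of the kind of heuristic this module describes, scaling K and the number of checks with database size; the breakpoints below are invented for demonstration and are not the values choose_vsmany_K actually uses.

def choose_vsmany_K_sketch(num_names, num_daids):
    # start from a default K and adjust with database size
    K = 4
    if num_names < 20:
        K = 2
    if num_daids > 10000:
        K = 6
    checks = max(128, K * 32)   # scale search checks with K, per the TODO above
    return K, checks

print(choose_vsmany_K_sketch(num_names=50, num_daids=20000))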

ibeis.algo.hots.bayes module

  1. Ambiguity / num names
  2. independence of annotations
  3. continuous
  4. exponential case
  5. specific examples of our problem
  6. human in loop
ibeis.algo.hots.bayes.cluster_query(model, query_vars=None, evidence=None, soft_evidence=None, method=None, operation=u'maximize')[source]

CommandLine:

python -m ibeis.algo.hots.bayes --exec-cluster_query --show
GridParams:
>>> param_grid = dict(
>>>     #method=['approx', 'bf', 'bp'],
>>>     method=['approx', 'bp'],
>>> )
>>> combos = ut.all_dict_combinations(param_grid)
>>> index = 0
>>> keys = 'method'.split(', ')
>>> method, = ut.dict_take(combos[index], keys)
GridSetup:
>>> from ibeis.algo.hots.bayes import *  # NOQA
>>> verbose = True
>>> other_evidence = {}
>>> name_evidence = [1, None, None, 0]
>>> score_evidence = [2, 0, 2]
>>> special_names = ['fred', 'sue', 'tom', 'paul']
>>> model = make_name_model(
>>>     num_annots=4, num_names=4, num_scores=3, verbose=True, mode=1,
>>>     special_names=special_names)
>>> method = None
>>> model, evidence, soft_evidence = update_model_evidence(
>>>     model, name_evidence, score_evidence, other_evidence)
>>> evidence = model._ensure_internal_evidence(evidence)
>>> query_vars = ut.list_getattr(model.ttype2_cpds[NAME_TTYPE], 'variable')
GridExample:
>>> # DISABLE_DOCTEST
>>> query_results = cluster_query(model, query_vars, evidence,
>>>                               method=method)
>>> print(ut.repr2(query_results['top_assignments'], nl=1))
>>> ut.quit_if_noshow()
>>> pgm_viz.show_model(model, evidence=evidence, **query_results)
>>> ut.show_if_requested()
ibeis.algo.hots.bayes.collapse_factor_labels(model, reduced_joint, evidence)[source]
ibeis.algo.hots.bayes.collapse_labels(model, evidence, reduced_variables, reduced_row_idxs, reduced_values)[source]
ibeis.algo.hots.bayes.compute_reduced_joint(model, query_vars, evidence, method, operation=u'maximize')[source]
ibeis.algo.hots.bayes.draw_tree_model(model, **kwargs)[source]
ibeis.algo.hots.bayes.get_hacked_pos(netx_graph, name_nodes=None, prog=u'dot')[source]
ibeis.algo.hots.bayes.make_name_model(num_annots, num_names=None, verbose=True, mode=1, num_scores=2, p_score_given_same=None, hack_score_only=False, score_basis=None, special_names=None)[source]

CommandLine:

python -m ibeis.algo.hots.bayes --exec-make_name_model --no-cnn
python -m ibeis.algo.hots.bayes --exec-make_name_model --show --no-cnn
python -m ibeis.algo.hots.bayes --exec-make_name_model --num-annots=3

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.bayes import *  # NOQA
>>> defaults = dict(num_annots=2, num_names=2, verbose=True)
>>> modeltype = ut.get_argval('--modeltype', default='bayes')
>>> kw = ut.argparse_funckw(make_name_model, defaults)
>>> model = make_name_model(**kw)
>>> ut.quit_if_noshow()
>>> model.show_model(show_prior=False, show_title=False, modeltype=modeltype)
>>> ut.show_if_requested()
ibeis.algo.hots.bayes.make_temp_state(state)[source]
ibeis.algo.hots.bayes.reduce_marginalize(phi, query_variables=None, evidence={}, inplace=False)[source]

Hack for reduction followed by marginalization

Example

>>> reduced_joint = joint.observe(
>>>     query_variables, evidence, inplace=False)
>>> new_rows = reduced_joint._row_labels()
>>> new_vals = reduced_joint.values.ravel()
>>> map_vals = new_rows[new_vals.argmax()]
>>> map_assign = dict(zip(reduced_joint.variables, map_vals))
ibeis.algo.hots.bayes.report_partitioning_statistics(new_reduced_joint)[source]
ibeis.algo.hots.bayes.show_model(model, evidence={}, soft_evidence={}, **kwargs)[source]

References

http://stackoverflow.com/questions/22207802/pygraphviz-networkx-set-node-level-or-layer

CommandLine:

python -m ibeis.algo.hots.bayes --exec-show_model --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.bayes import *  # NOQA
>>> model = '?'
>>> evidence = {}
>>> soft_evidence = {}
>>> result = show_model(model, evidence, soft_evidence)
>>> print(result)
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> ut.show_if_requested()
ibeis.algo.hots.bayes.test_model(num_annots, num_names, score_evidence=[], name_evidence=[], other_evidence={}, noquery=False, verbose=None, **kwargs)[source]
ibeis.algo.hots.bayes.update_model_evidence(model, name_evidence, score_evidence, other_evidence)[source]

CommandLine:

python -m ibeis.algo.hots.bayes --exec-update_model_evidence

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.bayes import *  # NOQA
>>> verbose = True
>>> other_evidence = {}
>>> name_evidence = [0, 0, 1, 1, None]
>>> score_evidence = ['high', 'low', 'low', 'low', 'low', 'high']
>>> model = make_name_model(num_annots=5, num_names=3, verbose=True,
>>>                         mode=1)
>>> update_model_evidence(model, name_evidence, score_evidence,
>>>                       other_evidence)

ibeis.algo.hots.chip_match module

class ibeis.algo.hots.chip_match.AnnotMatch(cm, *args, **kwargs)[source]

Bases: ibeis.algo.hots.chip_match.MatchBaseIO, utool.util_dev.NiceRepr

This implements part of the match between whole annotations and the other annotations / names. This does not include algorithm-specific feature matches.

algo_annot_scores
algo_name_scores
argsort(cm)[source]
as_dict(cm, *args, **kwargs)[source]
as_simple_dict(cm, keys=[])[source]
evaluate_dnids(cm, ibs)[source]
classmethod from_dict(class_dict, ibs=None)[source]

Convert dict of arguments back to ChipMatch object

get_annot_ranks(cm, daids)[source]
get_annot_scores(cm, daids, score_method=None)[source]
get_chip_shortlist_aids(cm, num_shortlist)[source]

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_pre_sver('PZ_MTEST', qaid_list=[18])
>>> cm = cm_list[0]
>>> cm.score_nsum(qreq_)
>>> top_daids = cm.get_chip_shortlist_aids(5 * 2)
>>> assert cm.qnid in ibs.get_annot_name_rowids(top_daids)
get_groundtruth_daids(cm)[source]
get_groundtruth_flags(cm)[source]
get_name_ranks(cm, dnids)[source]
get_name_shortlist_aids(cm, nNameShortList, nAnnotPerName)[source]

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_pre_sver('PZ_MTEST', qaid_list=[18])
>>> cm = cm_list[0]
>>> cm.score_nsum(qreq_)
>>> top_daids = cm.get_name_shortlist_aids(5, 2)
>>> assert cm.qnid in ibs.get_annot_name_rowids(top_daids)
get_nid_scores(cm, nid_list)[source]
get_num_matches_list(cm)[source]
get_ranked_nids(cm)[source]
get_ranked_nids_and_aids(cm)[source]

Hacky func

Returns:ibeis.algo.hots.name_scoring.NameScoreTup
get_top_aids(cm, ntop=None)[source]
get_top_gf_aids(cm, ibs, ntop=None)[source]
get_top_gt_aids(cm, ibs, ntop=None)[source]
get_top_nids(cm, ntop=None)[source]
get_top_scores(cm, ntop=None)[source]
get_top_truth_aids(cm, ibs, truth, ntop=None)[source]

top scoring aids of a certain truth value

groundtruth_daids
initialize(cm, qaid=None, daid_list=None, score_list=None, dnid_list=None, qnid=None, unique_nids=None, name_score_list=None, annot_score_list=None, autoinit=True)[source]

qaid and daid_list are not optional. fm_list and fsv_list are strongly encouraged and will probably break things if they are not there.

ishow_analysis(cm, qreq_, **kwargs)[source]
name_argsort(cm)[source]
num_daids
ranks
set_cannonical_annot_score(cm, annot_score_list)[source]
set_cannonical_name_score(cm, annot_score_list, name_score_list)[source]
show_analysis(cm, qreq_, **kwargs)[source]
show_single_namematch(cm, qreq_, dnid, fnum=None, pnum=None, homog=False, **kwargs)[source]

HACK FOR ANNOT MATCH

to_dict(cm, ibs=None)[source]
unique_name_ranks
class ibeis.algo.hots.chip_match.ChipMatch(cm, *args, **kwargs)[source]

Bases: ibeis.algo.hots.chip_match._ChipMatchVisualization, ibeis.algo.hots.chip_match.AnnotMatch, ibeis.algo.hots.chip_match._ChipMatchScorers, ibeis.algo.hots.old_chip_match._OldStyleChipMatchSimulator

behaves as the ChipMatchOldTup named tuple until we completely replace the old structure

append_featscore_column(cm, filtkey, filtweight_list, inplace=True)[source]
assert_self(cm, qreq_=None, ibs=None, strict=False, assert_feats=True, verbose=True)[source]
classmethod combine_cms(cm_list)[source]

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.core_annots import *  # NOQA
>>> ibs, depc, aid_list = testdata_core(size=4)
>>> request = depc.new_request('vsone', [1], [2, 3, 4], {'dim_size': 450})
>>> rawres_list2 = request.execute(postprocess=False)
>>> cm_list = ut.take_column(rawres_list2, 1)
>>> cls = ChipMatch
>>> out = ChipMatch.combine_cms(cm_list)
>>> out.score_nsum(request)
>>> ut.quit_if_noshow()
>>> out.ishow_analysis(request)
>>> ut.show_if_requested()
compress_annots(cm, flags, inplace=False, keepscores=True)[source]
compress_top_feature_matches(cm, num=10, rng=np.random, use_random=True)[source]

DO NOT USE

FIXME: Use boolean lists

Removes all but the best feature matches for testing purposes. rng = np.random.RandomState(0)

dfxs_list
extend_results(cm, qreq_, other_aids=None)[source]

Return a new ChipMatch containing empty data for an extended set of aids

Parameters:
  • qreq (ibeis.QueryRequest) – query request object with hyper-parameters
  • other_aids (None) – (default = None)
Returns:

out

Return type:

ibeis.ChipMatch

CommandLine:

python -m ibeis.algo.hots.chip_match --exec-extend_results --show

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> import ibeis
>>> import ibeis
>>> cm, qreq_ = ibeis.testdata_cm('PZ_MTEST',
>>>                               a='default:dindex=0:10,qindex=0:1',
>>>                               t='best:SV=False')
>>> assert len(cm.daid_list) == 9
>>> cm.assert_self(qreq_)
>>> other_aids = qreq_.ibs.get_valid_aids()
>>> out = cm.extend_results(qreq_, other_aids)
>>> assert len(out.daid_list) == 118
>>> out.assert_self(qreq_)
classmethod from_dict(class_dict, ibs=None)[source]

Convert dict of arguments back to ChipMatch object

classmethod from_json(json_str)[source]

Convert json string back to ChipMatch object

CommandLine:

# FIXME; util_test is broken with classmethods
python -m ibeis.algo.hots.chip_match --test-from_json --show

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> import ibeis
>>> cls = ChipMatch
>>> cm1, qreq_ = ibeis.testdata_cm()
>>> json_str = cm1.to_json()
>>> cm = ChipMatch.from_json(json_str)
>>> ut.quit_if_noshow()
>>> cm.score_nsum(qreq_)
>>> cm.show_single_namematch(qreq_, 1)
>>> ut.show_if_requested()
classmethod from_unscored(prior_cm, fm_list, fs_list, H_list=None, fsv_col_lbls=None)[source]
classmethod from_vsmany_match_tup(vmt, qaid=None, fsv_col_lbls=None)[source]
Parameters:
Returns:

cm

Return type:

ibeis.ChipMatch

classmethod from_vsone_match_tup(vmt_list, daid_list=None, qaid=None, fsv_col_lbls=None)[source]
Parameters:
  • vmt_list (list of ValidMatchTup_) – list of valid_match_tups
  • qaid (int) – query annotation id
  • fsv_col_lbls (None) –
Returns:

cm

Return type:

ibeis.ChipMatch

get_annot_fm(cm, daid)[source]
get_cvs_str(cm, numtop=6, ibs=None, sort=True)[source]
Parameters:
  • numtop (int) – (default = 6)
  • ibs (IBEISController) – ibeis controller object(default = None)
  • sort (bool) – (default = True)
Returns:

csv_str

Return type:

str

Notes

Very weird that it got a score qaid 6 vs 41 has

[72, 79, 0, 17, 6, 60, 15, 36, 63] [72, 79, 0, 17, 6, 60, 15, 36, 63] [72, 79, 0, 17, 6, 60, 15, 36, 63] [0.060, 0.053, 0.0497, 0.040, 0.016, 0, 0, 0, 0] [7, 40, 41, 86, 103, 88, 8, 101, 35]

makes very little sense

CommandLine:

python -m ibeis.algo.hots.chip_match --test-get_cvs_str --force-serial

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_post_sver()
>>> cm = cm_list[0]
>>> numtop = 6
>>> ibs = None
>>> sort = True
>>> csv_str = cm.get_cvs_str(numtop, ibs, sort)
>>> result = ('csv_str = \n%s' % (str(csv_str),))
>>> print(result)
get_flat_fm_info(cm, flags=None)[source]
Returns:info_
Return type:dict

CommandLine:

python -m ibeis.algo.hots.chip_match --exec-get_flat_fm_info --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_pre_sver(
>>>     defaultdb='PZ_MTEST', qaid_list=[18])
>>> cm = cm_list[0]
>>> info_ = cm.get_flat_fm_info()
>>> ut.assert_all_eq(ut.lmap(len, info_.values()))
>>> result = ('info_ = %s' % (ut.repr3(info_, precision=2),))
>>> print(result)
get_fpath(cm, qreq_)[source]
get_fs(cm, idx=None, colx=None, daid=None, col=None)[source]
get_fs_list(cm, colx=None, col=None)[source]
get_fsv_prod_list(cm)[source]
get_inspect_str(cm, qreq_)[source]
Parameters:qreq (QueryRequest) – query request object with hyper-parameters
Returns:varinfo
Return type:str

CommandLine:

python -m ibeis.algo.hots.chip_match --exec-get_inspect_str

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> import ibeis
>>> cm, qreq_ = ibeis.testdata_cm('PZ_MTEST', a='default:dindex=0:10,qindex=0:1', t='best:SV=False')
>>> varinfo = cm.get_inspect_str(qreq_)
>>> result = ('varinfo = %s' % (str(varinfo),))
>>> print(result)
get_num_feat_score_cols(cm)[source]
get_rawinfostr(cm, colored=None)[source]
Returns:varinfo
Return type:str

CommandLine:

python -m ibeis.algo.hots.chip_match --exec-get_rawinfostr --show --cex

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> import ibeis
>>> cm, qreq_ = ibeis.testdata_cm('PZ_MTEST', a='default:dindex=0:10,qindex=0:1', t='best:SV=False')
>>> varinfo = cm.get_rawinfostr()
>>> result = ('varinfo = %s' % (varinfo,))
>>> print(result)
initialize(cm, qaid=None, daid_list=None, fm_list=None, fsv_list=None, fk_list=None, score_list=None, H_list=None, fsv_col_lbls=None, dnid_list=None, qnid=None, unique_nids=None, name_score_list=None, annot_score_list=None, autoinit=True, filtnorm_aids=None, filtnorm_fxs=None)[source]

qaid and daid_list are not optional. fm_list and fsv_list are strongly encouraged and will probably break things if they are not there.

inspect_difference(cm, other)[source]
classmethod load_from_fpath(fpath, verbose=None)[source]
naids_list
nfxs_list
print_csv(cm, *args, **kwargs)[source]
print_inspect_str(cm, qreq_)[source]
print_rawinfostr(cm)[source]
qfxs_list
rrr(verbose=True)

special class reloading function

save(cm, qreq_, verbose=None)[source]
shortlist_subset(cm, top_aids)[source]

Returns a new cmtup_old with only the requested daids. TODO: rectify with take_feature_matches

sortself(cm)[source]

reorders the internal data using cm.score_list

take_annots(cm, idx_list, inplace=False, keepscores=True)[source]

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> import ibeis
>>> cm, qreq_ = ibeis.testdata_cm('PZ_MTEST',
>>>                               a='default:dindex=0:10,qindex=0:1',
>>>                               t='best:sv=false')
>>> idx_list = list(range(cm.num_daids))
>>> inplace = False
>>> keepscores = True
>>> other = out = cm.take_annots(idx_list, inplace, keepscores)
>>> result = ('out = %s' % (ut.repr2(out),))
>>> assert cm.inspect_difference(out), 'should be no difference'
>>> print(result)

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> import ibeis
>>> cm, qreq_ = ibeis.testdata_cm('PZ_MTEST',
>>>                               a='default:dindex=0:10,qindex=0:1',
>>>                               t='best:SV=False')
>>> idx_list = [0, 2]
>>> inplace = False
>>> keepscores = True
>>> other = out = cm.take_annots(idx_list, inplace, keepscores)
>>> result = ('out = %s' % (ut.repr2(out),))
>>> print(result)
take_feature_matches(cm, indicies_list, inplace=False, keepscores=True)[source]

Removes outlier feature matches. TODO: rectify with shortlist_subset

Parameters:
  • indicies_list (list) – list of lists of indices to keep. If an item is None, the match to the corresponding daid is removed.
  • inplace (bool) – (default = False)
Returns:

out

Return type:

ibeis.ChipMatch

CommandLine:

python -m ibeis.algo.hots.chip_match --exec-take_feature_matches --show

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> import ibeis
>>> cm, qreq_ = ibeis.testdata_cm('PZ_MTEST', a='default:dindex=0:10,qindex=0:1', t='best:SV=False')
>>> indicies_list = [list(range(i + 1)) for i in range(cm.num_daids)]
>>> inplace = False
>>> keepscores = True
>>> out = cm.take_feature_matches(indicies_list, inplace, keepscores)
>>> assert not cm.inspect_difference(out), 'should be different'
>>> result = ('out = %s' % (ut.repr2(out),))
>>> print(result)
to_json(cm)[source]

Serialize ChipMatch object as JSON string

CommandLine:

python -m ibeis.algo.hots.chip_match --test-ChipMatch.to_json:0
python -m ibeis.algo.hots.chip_match --test-ChipMatch.to_json
python -m ibeis.algo.hots.chip_match --test-ChipMatch.to_json:1 --show

Example

>>> # ENABLE_DOCTEST
>>> # Simple doctest demonstrating the json format
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(defaultdb='testdb1')
>>> cm, qreq_ = ibs.query_chips(1, [2, 3, 4, 5],
>>>                             return_request=True)
>>> cm.compress_top_feature_matches(num=4, rng=np.random.RandomState(0))
>>> # Serialize
>>> print('\n\nRaw ChipMatch JSON:\n')
>>> json_str = cm.to_json()
>>> print(json_str)
>>> print('\n\nPretty ChipMatch JSON:\n')
>>> # Pretty String Formatting
>>> dictrep = ut.from_json(json_str)
>>> dictrep = ut.delete_dict_keys(dictrep, [key for key, val in dictrep.items() if val is None])
>>> result  = ut.dict_str(dictrep, nl=2, precision=2, hack_liststr=True, key_order_metric='strlen')
>>> result = result.replace('u\'', '"').replace('\'', '"')
>>> print(result)

Example

>>> # ENABLE_DOCTEST
>>> # test to convert back and forth from json
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> import ibeis
>>> cm, qreq_ = ibeis.testdata_cm()
>>> cm1 = cm
>>> # Serialize
>>> json_str = cm.to_json()
>>> print(repr(json_str))
>>> # Unserialize
>>> cm = ChipMatch.from_json(json_str)
>>> # Show if it works
>>> ut.quit_if_noshow()
>>> cm.score_nsum(qreq_)
>>> cm.show_single_namematch(qreq_, 1)
>>> ut.show_if_requested()
>>> # result = ('json_str = \n%s' % (str(json_str),))
>>> # print(result)
class ibeis.algo.hots.chip_match.MatchBaseIO[source]

Bases: object

copy()[source]
classmethod load_from_fpath(fpath, verbose=False)[source]
save_to_fpath(cm, fpath, verbose=False)[source]

CommandLine:

python ibeis --tf MatchBaseIO.save_to_fpath --verbtest --show

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> qaid = 18
>>> ibs, qreq_, cm_list = plh.testdata_pre_sver('PZ_MTEST', qaid_list=[qaid])
>>> cm = cm_list[0]
>>> cm.score_nsum(qreq_)
>>> dpath = ut.get_app_resource_dir('ibeis')
>>> fpath = join(dpath, 'tmp_chipmatch.cPkl')
>>> ut.delete(fpath)
>>> cm.save_to_fpath(fpath)
>>> cm2 = ChipMatch.load_from_fpath(fpath)
>>> assert cm == cm2
>>> ut.quit_if_noshow()
>>> cm.ishow_analysis(qreq_)
>>> ut.show_if_requested()
exception ibeis.algo.hots.chip_match.NeedRecomputeError[source]

Bases: exceptions.Exception

class ibeis.algo.hots.chip_match.TestLogger(testlog, verbose=True)[source]

Bases: object

context(testlog, name)[source]
end_test(testlog)[source]
log_failed(testlog, msg)[source]
log_passed(testlog, msg)[source]
log_skipped(testlog, msg)[source]
skip_test(testlog)[source]
start_test(testlog, name)[source]
ibeis.algo.hots.chip_match.check_arrs_eq(arr1, arr2)[source]
ibeis.algo.hots.chip_match.extend_nplists(x_list, num, shape, dtype)[source]
ibeis.algo.hots.chip_match.extend_nplists_(x_list, num, shape, dtype)[source]
ibeis.algo.hots.chip_match.extend_pylist(x_list, num, val)[source]
ibeis.algo.hots.chip_match.extend_pylist_(x_list, num, val)[source]
ibeis.algo.hots.chip_match.extend_scores(vals, num)[source]
ibeis.algo.hots.chip_match.filtnorm_op(filtnorm_, op_, *args, **kwargs)[source]
ibeis.algo.hots.chip_match.get_chipmatch_fname(qaid, qreq_, qauuid=None, cfgstr=None, TRUNCATE_UUIDS=False, MAX_FNAME_LEN=200)[source]

CommandLine:

python -m ibeis.algo.hots.chip_match --test-get_chipmatch_fname

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.chip_match import *  # NOQA
>>> qreq_, args = plh.testdata_pre('spatial_verification',
>>>                                defaultdb='PZ_MTEST', qaid_override=[18],
>>>                                p='default:sqrd_dist_on=True')
>>> cm_list = args.cm_list_FILT
>>> cm = cm_list[0]
>>> fname = get_chipmatch_fname(cm.qaid, qreq_, qauuid=None,
>>>                             TRUNCATE_UUIDS=False, MAX_FNAME_LEN=200)
>>> result = fname
>>> print(result)

qaid=18_cm_cvgrsbnffsgifyom_quuid=a126d459-b730-573e-7a21-92894b016565.cPkl

ibeis.algo.hots.chip_match.prepare_dict_uuids(class_dict, ibs)[source]

Hacks to ensure proper uuid conversion

ibeis.algo.hots.chip_match.safe_check_lens_eq(None, 1)[source]

safe_check_lens_eq([3], [2, 4])

ibeis.algo.hots.chip_match.safe_check_nested_lens_eq(None, 1)[source]

safe_check_nested_lens_eq([[3, 4]], [[2, 4]]) safe_check_nested_lens_eq([[1, 2, 3], [1, 2]], [[1, 2, 3], [1, 2]]) safe_check_nested_lens_eq([[1, 2, 3], [1, 2]], [[1, 2, 3], [1]])

ibeis.algo.hots.chip_match.safeop(op_, xs, *args, **kwargs)[source]
ibeis.algo.hots.chip_match.testdata_cm()[source]

ibeis.algo.hots.crf module

ibeis.algo.hots.crf.chain_crf()[source]
ibeis.algo.hots.crf.crftest()[source]

pip install pyqpbo
pip install pystruct

http://taku910.github.io/crfpp/#install

cd ~/tmp
#wget https://drive.google.com/folderview?id=0B4y35FiV1wh7fngteFhHQUN2Y1B5eUJBNHZUemJYQV9VWlBUb3JlX0xBdWVZTWtSbVBneU0&usp=drive_web#list
7z x CRF++-0.58.tar.gz
7z x CRF++-0.58.tar
cd CRF++-0.58
chmod +x configure
./configure
make

ibeis.algo.hots.demobayes module

ibeis.algo.hots.demobayes.classify_k(cfg={})[source]

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-classify_k --show --ev :nA=3
python -m ibeis.algo.hots.demobayes --exec-classify_k --show --ev :nA=3,k=1
python -m ibeis.algo.hots.demobayes --exec-classify_k --show --ev :nA=3,k=0 --method=approx
python -m ibeis.algo.hots.demobayes --exec-classify_k --show --ev :nA=10,k=1 --method=approx

Example

>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> cfg_list = testdata_demo_cfgs()
>>> classify_k(cfg_list[0])
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.classify_one_new_unknown()[source]

Make a model that knows who the previous annots are and tries to classify a new annot

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-classify_one_new_unknown --verbose
python -m ibeis.algo.hots.demobayes --exec-classify_one_new_unknown --show --verbose --present
python3 -m ibeis.algo.hots.demobayes --exec-classify_one_new_unknown --verbose
python3 -m ibeis.algo.hots.demobayes --exec-classify_one_new_unknown --verbose --diskshow --verbose --present --save demo5.png --dpath . --figsize=20,10 --dpi=128 --clipwhite

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> result = classify_one_new_unknown()
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.demo_ambiguity()[source]

Test what happens when an annotation needs to choose between one of two names

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-demo_ambiguity --show --verbose --present

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> result = demo_ambiguity()
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.demo_annot_idependence_overlap()[source]
Given:
  • an unknown annotation d
  • three annots with the same name (Fred) a, b, and c
  • a and b are near duplicates
  • (a and c) / (b and c) are novel views
Goal:
  • If d matches to a and b, the probability that d is Fred should not be much more than if d matched only a or only b.

  • The probability that d is Fred given it matches to any of the 3 annots alone should be equal:

    P(d is Fred | Mad=1) = P(d is Fred | Mbd=1) = P(d is Fred | Mcd=1)

  • The probability that d is Fred given two matches to any two of those annots should be greater than the probability given only one.

    P(d is Fred | Mad=1, Mbd=1) > P(d is Fred | Mad=1)
    P(d is Fred | Mad=1, Mcd=1) > P(d is Fred | Mad=1)

  • The probability that d is Fred given matches to two near-duplicate annots should be less than if d matches two non-duplicate annots.

    P(d is Fred | Mad=1, Mcd=1) > P(d is Fred | Mad=1, Mbd=1)

  • The probability that d is Fred given two near duplicates should be only epsilon greater than a match to either one individually.

    P(d is Fred | Mad=1, Mbd=1) = P(d is Fred | Mad=1) + epsilon

Method:

We need to model the fact that there are other causes that create the effect of a high score, namely near duplicates. This can be done by adding an extra conditional so that the score depends on whether the annotations are near duplicates as well as whether they match.

P(S_ij | Mij) -> P(S_ij | Mij, Dij)

where

Dij is a random variable indicating if the pair of annotations is a near duplicate.

We can model this as an independent variable

P(Dij) = {True: .5, False: .5}

or as depending on if the names match.

P(Dij | Mij) = {'same': {True: .5, False: .5}, 'diff': {True: 0, False: 1}}
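
As a rough illustration only (this assumes pgmpy, which is referenced elsewhere in this package; the variable names Dij and Mij and the state ordering below are choices made for this sketch, not part of demobayes), the two options above could be written as conditional probability tables:

from pgmpy.factors.discrete import TabularCPD

# Option 1: Dij as an independent prior, P(Dij) = {True: .5, False: .5}
dij_prior = TabularCPD('Dij', 2, [[0.5], [0.5]])

# Option 2: Dij depends on whether the names match.
# Columns are Mij = same, Mij = diff; rows are Dij = True, Dij = False.
dij_cpd = TabularCPD('Dij', 2, [[0.5, 0.0],
                                [0.5, 1.0]],
                     evidence=['Mij'], evidence_card=[2])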

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-demo_annot_idependence_overlap --verbose --present --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> result = demo_annot_idependence_overlap()
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.demo_bayesnet(cfg={})[source]

Make a model that knows who the previous annots are and tries to classify a new annot

CommandLine:

python -m ibeis --tf demo_bayesnet --diskshow --verbose --save demo4.png --dpath . --figsize=20,10 --dpi=128 --clipwhite

python -m ibeis --tf demo_bayesnet --ev :nA=3,Sab=0,Sac=0,Sbc=1
python -m ibeis --tf demo_bayesnet --ev :nA=4,Sab=0,Sac=0,Sbc=1,Sbd=1 --show
python -m ibeis --tf demo_bayesnet --ev :nA=4,Sab=0,Sac=0,Sbc=1,Scd=1 --show
python -m ibeis --tf demo_bayesnet --ev :nA=4,Sab=0,Sac=0,Sbc=1,Sbd=1,Scd=1 --show

python -m ibeis --tf demo_bayesnet --ev :nA=3,Sab=0,Sac=0,Sbc=1
python -m ibeis --tf demo_bayesnet --ev :nA=5,rand_scores=True --show

python -m ibeis --tf demo_bayesnet --ev :nA=4,nS=3,rand_scores=True --show --verbose
python -m ibeis --tf demo_bayesnet --ev :nA=5,nS=2,Na=fred,rand_scores=True --show --verbose
python -m ibeis --tf demo_bayesnet --ev :nA=5,nS=5,Na=fred,rand_scores=True --show --verbose
python -m ibeis --tf demo_bayesnet --ev :nA=4,nS=2,Na=fred,rand_scores=True --show --verbose

python -m ibeis.algo.hots.demobayes --exec-demo_bayesnet \
        --ev =:nA=4,Sab=0,Sac=0,Sbc=1 \
        :Sbd=1 :Scd=1 :Sbd=1,Scd=1 :Sbd=1,Scd=1,Sad=0 \
        --show --present

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> cfg_list = testdata_demo_cfgs()
>>> print('cfg_list = %r' % (cfg_list,))
>>> for cfg in cfg_list:
>>>     demo_bayesnet(cfg)
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.demo_conflicting_evidence()[source]

Notice that the number of annotations in the graph does not affect the probability of names.

ibeis.algo.hots.demobayes.demo_model_idependencies()[source]

Independences of the 3 annot 3 name model

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-demo_model_idependencies --mode=1 --num-names=2 --show
python -m ibeis.algo.hots.demobayes --exec-demo_model_idependencies --mode=2

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> result = demo_model_idependencies()
>>> print(result)
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.demo_modes()[source]

Look at the last result of the different names demo under different modes

ibeis.algo.hots.demobayes.demo_name_annot_complexity()[source]

This demo is meant to show the structure of the graph as more annotations and names are added.

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-demo_name_annot_complexity --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> demo_name_annot_complexity()
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.demo_single_add()[source]

This demo shows how a name is assigned to a new annotation.

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-demo_single_add --show --present --mode=1

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> demo_single_add()
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.demo_structure()[source]

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-demo_structure --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> result = demo_structure()
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.get_toy_annots(num_annots, num_names=None, initial_aids=None, initial_nids=None, nid_sequence=None, seed=None)[source]
Parameters:
  • num_annots (int) –
  • num_names (int) – (default = None)
  • initial_aids (None) – (default = None)
  • initial_nids (None) – (default = None)
  • nid_sequence (None) – (default = None)
  • seed (None) – (default = None)
Returns:

(aids, nids, aids1, nids1, all_aids, all_nids)

Return type:

tuple

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-get_toy_annots

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> num_annots = 1
>>> num_names = 5
>>> initial_aids = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=np.int64)
>>> initial_nids = np.array([0, 0, 1, 2, 2, 1, 1, 1, 2, 3], dtype=np.int64)
>>> nid_sequence = np.array([0, 0, 1, 2, 2, 1, 1], dtype=np.int64)
>>> seed = 0
>>> (aids, nids, aids1, nids1, all_aids, all_nids) = get_toy_annots(num_annots, num_names, initial_aids, initial_nids, nid_sequence, seed)
>>> result = ('(aids, nids, aids1, nids1, all_aids, all_nids) = %s' % (ut.repr2((aids, nids, aids1, nids1, all_aids, all_nids), nl=1),))
>>> print(result)
ibeis.algo.hots.demobayes.get_toy_data_1v1(num_annots=5, num_names=None, **kwargs)[source]

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-get_toy_data_1v1 --show

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> toy_data = get_toy_data_1v1()
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> show_toy_distributions(toy_data['toy_params'])
>>> ut.show_if_requested()
Example1:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> toy_data = get_toy_data_1v1()
>>> kwargs = {}
>>> initial_aids = toy_data['aids']
>>> initial_nids = toy_data['nids']
>>> num_annots = 1
>>> num_names = 6
>>> toy_data2 = get_toy_data_1v1(num_annots, num_names, initial_aids=initial_aids, initial_nids=initial_nids)
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> show_toy_distributions(toy_data['toy_params'])
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.get_toy_data_1vM(num_annots, num_names=None, **kwargs)[source]
Parameters:
  • num_annots (int) –
  • num_names (int) – (default = None)
Kwargs:
initial_aids, initial_nids, nid_sequence, seed
Returns:(pair_list, feat_list)
Return type:tuple

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-get_toy_data_1vM --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> num_annots = 1000
>>> num_names = 40
>>> get_toy_data_1vM(num_annots, num_names)
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.learn_prob_score(num_scores=5, pad=55, ret_enc=False, use_cache=None)[source]
Parameters:num_scores (int) – (default = 5)
Returns:(discr_domain, discr_p_same)
Return type:tuple

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-learn_prob_score --show

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> num_scores = 2
>>> (discr_domain, discr_p_same, encoder) = learn_prob_score(num_scores, ret_enc=True, use_cache=False)
>>> print('discr_p_same = %r' % (discr_p_same,))
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> encoder.visualize()
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.make_bayes_notebook()[source]

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-make_bayes_notebook

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> result = make_bayes_notebook()
>>> print(result)
ibeis.algo.hots.demobayes.show_model_templates()[source]

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-show_model_templates

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> result = show_model_templates()
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.show_toy_distributions(toy_params)[source]
ibeis.algo.hots.demobayes.test_triangle_property()[source]

CommandLine:

python -m ibeis.algo.hots.demobayes --exec-test_triangle_property --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.demobayes import *  # NOQA
>>> result = test_triangle_property()
>>> ut.show_if_requested()
ibeis.algo.hots.demobayes.testdata_demo_cfgs()[source]

ibeis.algo.hots.devcases module

development module storing my “development state”

TODO:
  • figure out which packages I use have licensing issues.
    • Reimplement them or work around them.

Explicit Negative Matches between chips

ibeis.algo.hots.devcases.find_close_incorrect_match(ibs, qaids)[source]
ibeis.algo.hots.devcases.fix_pz_master()[source]

CommandLine:

python -m ibeis.algo.hots.devcases --test-fix_pz_master --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.devcases import *  # NOQA
>>> # build test data
>>> # execute function
>>> result = fix_pz_master()
>>> # verify results
>>> print(result)
ibeis.algo.hots.devcases.get_dev_test_fpaths(index)[source]
ibeis.algo.hots.devcases.get_gzall_small_test()[source]

ibs.get_annot_visual_uuids([qaid, aid])

ibeis.algo.hots.devcases.get_pz_master_testcase()[source]
ibeis.algo.hots.devcases.load_gztest(ibs)[source]

CommandLine:

python -m ibeis.algo.hots.special_query --test-load_gztest

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.devcases import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb('GZ_ALL')
ibeis.algo.hots.devcases.myquery()[source]
BUG::

THERE IS A BUG SOMEWHERE: HOW IS THIS POSSIBLE? If everything is weighted, how did the true positive even get a score while the true negative did not?

qres_copy.filtkey_list = ['ratio', 'fg', 'homogerr', 'distinctiveness']

CORRECT STATS {
    'max'  : [0.832, 0.968, 0.604, 0.000],
    'min'  : [0.376, 0.524, 0.000, 0.000],
    'mean' : [0.561, 0.924, 0.217, 0.000],
    'std'  : [0.114, 0.072, 0.205, 0.000],
    'nMin' : [1, 1, 1, 51],
    'nMax' : [1, 1, 1, 1],
    'shape': (52, 4),
}

INCORRECT STATS {
    'max'  : [0.759, 0.963, 0.264, 0.000],
    'min'  : [0.379, 0.823, 0.000, 0.000],
    'mean' : [0.506, 0.915, 0.056, 0.000],
    'std'  : [0.125, 0.039, 0.078, 0.000],
    'nMin' : [1, 1, 1, 24],
    'nMax' : [1, 1, 1, 1],
    'shape': (26, 4),
}

# score_diff, tp_score, tn_score, p, K, dcvs_clip_max, fg_power, homogerr_power
0.494, 0.494, 0.000, 73.000, 2, 0.500, 0.100, 10.000

See how separability changes as we vary things.

CommandLine:

python -m ibeis.algo.hots.devcases --test-myquery
python -m ibeis.algo.hots.devcases --test-myquery --show --index 0
python -m ibeis.algo.hots.devcases --test-myquery --show --index 1
python -m ibeis.algo.hots.devcases --test-myquery --show --index 2

References

http://en.wikipedia.org/wiki/Pareto_distribution <- look into

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.all_imports import *  # NOQA
>>> from ibeis.algo.hots.devcases import *  # NOQA
>>> ut.dev_ipython_copypaster(myquery) if ut.inIPython() else myquery()
>>> pt.show_if_requested()
ibeis.algo.hots.devcases.show_power_law_plots()[source]

CommandLine:

python -m ibeis.algo.hots.devcases --test-show_power_law_plots --show

Example

>>> # DISABLE_DOCTEST
>>> #%pylab qt4
>>> from ibeis.all_imports import *  # NOQA
>>> from ibeis.algo.hots.devcases import *  # NOQA
>>> show_power_law_plots()
>>> pt.show_if_requested()
ibeis.algo.hots.devcases.testdata_my_exmaples(index)[source]

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.all_imports import *  # NOQA
>>> from ibeis.algo.hots.devcases import *  # NOQA
>>> index = 1

ibeis.algo.hots.distinctiveness_normalizer module

External mechanism for computing feature distinctiveness

stores some set of vectors which lose their association with their parent.

class ibeis.algo.hots.distinctiveness_normalizer.DistinctivnessNormalizer(dstcnvs_normer, species, cachedir=None)[source]

Bases: utool.util_cache.Cachable

add_support(dstcnvs_normer, new_vecs)[source]
archive(dstcnvs_normer, cachedir=None, overwrite=False)[source]
ensure_flann(dstcnvs_normer, cachedir=None)[source]
exists(dstcnvs_normer, cachedir=None, verbose=True, need_flann=False, *args, **kwargs)[source]
Parameters:
  • cachedir (str) – cache directory
  • verbose (bool) – verbosity flag
Returns:

load_success

Return type:

flag

CommandLine:

python -m ibeis.algo.hots.distinctiveness_normalizer --test-exists

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.distinctiveness_normalizer import *  # NOQA
>>> # build test data
>>> dstcnvs_normer = testdata_distinctiveness()[0]
>>> assert dstcnvs_normer.exists()
ext = u'.cPkl'
get_cfgstr(dstcnvs_normer)[source]
get_distinctiveness(dstcnvs_normer, qfx2_vec, dcvs_K=2, dcvs_power=1.0, dcvs_max_clip=1.0, dcvs_min_clip=0.0)[source]
Parameters:qfx2_vec (ndarray) – mapping from query feature index to vec

CommandLine:

python -m ibeis.algo.hots.distinctiveness_normalizer --test-get_distinctiveness --show
python -m ibeis.algo.hots.distinctiveness_normalizer --test-get_distinctiveness --db GZ_ALL --show
python -m ibeis.algo.hots.distinctiveness_normalizer --test-get_distinctiveness --show --dcvs_power .25
python -m ibeis.algo.hots.distinctiveness_normalizer --test-get_distinctiveness --show --dcvs_power .5
python -m ibeis.algo.hots.distinctiveness_normalizer --test-get_distinctiveness --show --dcvs_power .1
python -m ibeis.algo.hots.distinctiveness_normalizer --test-get_distinctiveness --show --dcvs_K 1&
python -m ibeis.algo.hots.distinctiveness_normalizer --test-get_distinctiveness --show --dcvs_K 2&
python -m ibeis.algo.hots.distinctiveness_normalizer --test-get_distinctiveness --show --dcvs_K 3&

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.distinctiveness_normalizer import *  # NOQA
>>> dstcnvs_normer, qreq_ = testdata_distinctiveness()
>>> qaid = qreq_.get_external_qaids()[0]
>>> qfx2_vec = qreq_.ibs.get_annot_vecs(qaid, config2_=qreq_.qparams)
>>> default_dict = {'dcvs_power': .25, 'dcvs_K': 5, 'dcvs_max_clip': .5}
>>> kwargs = ut.argparse_dict(default_dict)
>>> qfx2_dstncvs = dstcnvs_normer.get_distinctiveness(qfx2_vec, **kwargs)
>>> ut.assert_eq(len(qfx2_dstncvs.shape), 1)
>>> assert np.all(qfx2_dstncvs <= 1)
>>> assert np.all(qfx2_dstncvs >= 0)
>>> ut.quit_if_noshow()
>>> # Show distinctivness on an animal and a corresponding graph
>>> import plottool as pt
>>> chip = qreq_.ibs.get_annot_chips(qaid)
>>> qfx2_kpts = qreq_.ibs.get_annot_kpts(qaid, config2_=qreq_.qparams)
>>> show_chip_distinctiveness_plot(chip, qfx2_kpts, qfx2_dstncvs)
>>> #pt.figure(2)
>>> #pt.show_all_colormaps()
>>> pt.show_if_requested()
get_flann_fpath(dstcnvs_normer, cachedir)[source]
get_prefix(dstcnvs_normer)[source]
init_support(dstcnvs_normer, vecs, verbose=True)[source]
load(dstcnvs_normer, cachedir=None, verbose=True, *args, **kwargs)[source]
load_or_build_flann(dstcnvs_normer, cachedir=None, verbose=True, *args, **kwargs)[source]
prefix = u'distinctivness'
publish(dstcnvs_normer, cachedir=None)[source]

Sets this as the default normalizer available for download. ONLY DEVELOPERS CAN PERFORM THIS OPERATION.

Parameters:cachedir (str) –

CommandLine:

python -m ibeis.algo.hots.distinctiveness_normalizer --test-publish

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.distinctiveness_normalizer import *  # NOQA
>>> dstcnvs_normer = testdata_distinctiveness()[0]
>>> dstcnvs_normer.rebuild()
>>> dstcnvs_normer.save()
>>> result = dstcnvs_normer.publish(cachedir)
>>> # verify results
>>> print(result)
rebuild(dstcnvs_normer, verbose=True, quiet=False)[source]
save(dstcnvs_normer, cachedir=None, verbose=True, *args, **kwargs)[source]

args = tuple()
kwargs = {}

save_flann(dstcnvs_normer, cachedir=None, verbose=True)[source]
ibeis.algo.hots.distinctiveness_normalizer.clear_distinctivness_cache(j)[source]
ibeis.algo.hots.distinctiveness_normalizer.compute_distinctiveness_from_dist(norm_dist, dcvs_power, dcvs_max_clip, dcvs_min_clip)[source]

Compute distinctiveness from distance to dcvs_K+1 nearest neighbor
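
For intuition, here is a minimal sketch of the kind of transform this describes (clip the normalized distance to the dcvs_K+1 neighbor, rescale, and apply a power law); the exact formula used by ibeis may differ:

import numpy as np

def distinctiveness_sketch(norm_dist, dcvs_power=1.0, dcvs_max_clip=1.0, dcvs_min_clip=0.0):
    # a larger distance to the dcvs_K+1 nearest neighbor means a more distinctive feature
    clipped = np.clip(norm_dist, dcvs_min_clip, dcvs_max_clip)
    # rescale into [0, 1] and shape the response with a power law
    scaled = (clipped - dcvs_min_clip) / (dcvs_max_clip - dcvs_min_clip)
    return scaled ** dcvs_power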

ibeis.algo.hots.distinctiveness_normalizer.dev_train_distinctiveness(species=None)[source]
Parameters:

CommandLine:

python -m ibeis.algo.hots.distinctiveness_normalizer --test-dev_train_distinctiveness

alias dev_train_distinctiveness='python -m ibeis.algo.hots.distinctiveness_normalizer --test-dev_train_distinctiveness'
# Publishing (uses cached normalizers if available)
dev_train_distinctiveness --species GZ --publish
dev_train_distinctiveness --species PZ --publish
dev_train_distinctiveness --species PZ --retrain

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.distinctiveness_normalizer import *  # NOQA
>>> import ibeis
>>> species = ut.get_argval('--species', str, 'zebra_grevys')
>>> dev_train_distinctiveness(species)
ibeis.algo.hots.distinctiveness_normalizer.download_baseline_distinctiveness_normalizer(cachedir, species)[source]
ibeis.algo.hots.distinctiveness_normalizer.list_distinctivness_cache()[source]
ibeis.algo.hots.distinctiveness_normalizer.list_published_distinctivness()[source]

CommandLine:

python -m ibeis.algo.hots.distinctiveness_normalizer --test-list_published_distinctivness

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.distinctiveness_normalizer import *  # NOQA
>>> published_fpaths = list_published_distinctivness()
>>> print(ut.list_str(published_fpaths))
ibeis.algo.hots.distinctiveness_normalizer.request_ibeis_distinctiveness_normalizer(qreq_, verbose=True)[source]
Parameters:qreq (QueryRequest) – query request object with hyper-parameters

CommandLine:

python -m ibeis.algo.hots.distinctiveness_normalizer --test-request_ibeis_distinctiveness_normalizer

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.distinctiveness_normalizer import *  # NOQA
>>> import ibeis
>>> # build test data
>>> ibs = ibeis.opendb('testdb1')
>>> daids = ibs.get_valid_aids(species=ibeis.const.TEST_SPECIES.ZEB_PLAIN)
>>> qaids = ibs.get_valid_aids(species=ibeis.const.TEST_SPECIES.ZEB_PLAIN)
>>> qreq_ = ibs.new_query_request(qaids, daids)
>>> # execute function
>>> dstcnvs_normer = request_ibeis_distinctiveness_normalizer(qreq_)
>>> # verify results
>>> assert dstcnvs_normer is not None
ibeis.algo.hots.distinctiveness_normalizer.request_species_distinctiveness_normalizer(species, cachedir=None, verbose=False)[source]

helper function to get the distinctiveness model independent of IBEIS.

ibeis.algo.hots.distinctiveness_normalizer.show_chip_distinctiveness_plot(chip, kpts, dstncvs, fnum=1, pnum=None)[source]
ibeis.algo.hots.distinctiveness_normalizer.test_single_annot_distinctiveness_params(ibs, aid)[source]

CommandLine:

python -m ibeis.algo.hots.distinctiveness_normalizer --test-test_single_annot_distinctiveness_params --show
python -m ibeis.algo.hots.distinctiveness_normalizer --test-test_single_annot_distinctiveness_params --show --db GZ_ALL

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.distinctiveness_normalizer import *  # NOQA
>>> import plottool as pt
>>> import ibeis
>>> # build test data
>>> ibs = ibeis.opendb(ut.get_argval('--db', type_=str, default='PZ_MTEST'))
>>> aid = ut.get_argval('--aid', type_=int, default=1)
>>> # execute function
>>> test_single_annot_distinctiveness_params(ibs, aid)
>>> pt.show_if_requested()
ibeis.algo.hots.distinctiveness_normalizer.testdata_distinctiveness()[source]

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.distinctiveness_normalizer import *  # NOQA
>>> dstcnvs_normer, qreq_ = testdata_distinctiveness()
ibeis.algo.hots.distinctiveness_normalizer.view_distinctiveness_model_dir()[source]

CommandLine:

python -m ibeis.algo.hots.distinctiveness_normalizer --test-view_distinctiveness_model_dir

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.distinctiveness_normalizer import *  # NOQA
>>> view_distinctiveness_model_dir()
ibeis.algo.hots.distinctiveness_normalizer.view_publish_dir()[source]

CommandLine:

python -m ibeis.algo.hots.distinctiveness_normalizer --test-view_publish_dir

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.distinctiveness_normalizer import *  # NOQA
>>> view_publish_dir()

ibeis.algo.hots.exceptions module

exception ibeis.algo.hots.exceptions.HotsCacheMissError[source]

Bases: exceptions.Exception

exception ibeis.algo.hots.exceptions.HotsNeedsRecomputeError[source]

Bases: exceptions.Exception

ibeis.algo.hots.exceptions.NoDescriptorsException(ibs, qaid)[source]
exception ibeis.algo.hots.exceptions.QueryException(msg)[source]

Bases: exceptions.Exception

ibeis.algo.hots.graph_iden module

class ibeis.algo.hots.graph_iden.AnnotInference(infr, qreq_, cm_list, user_feedback=None)[source]

Bases: object

Make name inferences about a series of AnnotMatches

CommandLine:

python -m ibeis.algo.hots.graph_iden AnnotInference --show --no-cnn

Example

>>> from ibeis.algo.hots.graph_iden import *  # NOQA
>>> import ibeis
>>> #qreq_ = ibeis.testdata_qreq_(default_qaids=[1, 2, 3, 4], default_daids=[2, 3, 4, 5, 6, 7, 8, 9, 10])
>>> # a='default:dsize=20,excluderef=True,qnum_names=5,min_pername=3,qsample_per_name=1,dsample_per_name=2',
>>> a='default:dsize=20,excluderef=True,qnum_names=5,qsize=1,min_pername=3,qsample_per_name=1,dsample_per_name=2'
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='PZ_MTEST', a=a, verbose=0, use_cache=False)
>>> # a='default:dsize=2,qsize=1,excluderef=True,qnum_names=5,min_pername=3,qsample_per_name=1,dsample_per_name=2',
>>> ibs = qreq_.ibs
>>> cm_list = qreq_.execute()
>>> self1 = AnnotInference(qreq_, cm_list)
>>> inf_dict1 = self1.make_annot_inference_dict(True)
>>> user_feedback =  self1.simulate_user_feedback()
>>> self2 = AnnotInference(qreq_, cm_list, user_feedback)
>>> inf_dict2 = self2.make_annot_inference_dict(True)
>>> print('inference_dict = ' + ut.repr3(inf_dict1, nl=3))
>>> print('inference_dict2 = ' + ut.repr3(inf_dict2, nl=3))
>>> ut.quit_if_noshow()
>>> graph1 = self1.make_graph(show=True)
>>> graph2 = self2.make_graph(show=True)
>>> ut.show_if_requested()
choose_thresh(infr)[source]
infer_cut(infr)[source]
initialize_graph_and_model(infr)[source]

Unused in internal split stuff

pt.qt4ensure()
layout_info = pt.show_nx(graph, as_directed=False, fnum=1,
                         layoutkw=dict(prog='neato'), use_image=True, verbose=0)
ax = pt.gca()
pt.zoom_factory()
pt.interactions.PanEvents()

make_annot_inference_dict(infr, internal=False)[source]
make_clusters(infr)[source]
make_graph(infr, show=False)[source]
make_inference(infr)[source]
make_prob_annots(infr)[source]
make_prob_names(infr)[source]
rrr(verbose=True)

special class reloading function

simulate_user_feedback(infr)[source]
class ibeis.algo.hots.graph_iden.InfrModel(model, graph)[source]

Bases: utool.util_dev.NiceRepr

estimate_threshold(model, method=None)[source]

import plottool as pt
idx3 = vt.find_elbow_point(curve[idx1:idx2 + 1]) + idx1
pt.plot(curve)
pt.plot(idx1, curve[idx1], 'bo')
pt.plot(idx2, curve[idx2], 'ro')
pt.plot(idx3, curve[idx3], 'go')

rrr(verbose=True)

special class reloading function

run_inference(model, thresh=None, n_labels=None, n_iter=5, algorithm=u'expansion')[source]
run_inference2(model, max_labels=5)[source]
total_energy
update_graph(model)[source]

ibeis.algo.hots.graph_tmp module

class ibeis.algo.hots.graph_tmp.AnnotInference(infr, qreq_, cm_list, user_feedback=None)[source]

Bases: object

Make name inferences about a series of AnnotMatches

CommandLine:

python -m ibeis.algo.hots.graph_iden AnnotInference --show --no-cnn

Example

>>> from ibeis.algo.hots.graph_iden import *  # NOQA
>>> import ibeis
>>> #qreq_ = ibeis.testdata_qreq_(default_qaids=[1, 2, 3, 4], default_daids=[2, 3, 4, 5, 6, 7, 8, 9, 10])
>>> # a='default:dsize=20,excluderef=True,qnum_names=5,min_pername=3,qsample_per_name=1,dsample_per_name=2',
>>> a='default:dsize=20,excluderef=True,qnum_names=5,qsize=1,min_pername=3,qsample_per_name=1,dsample_per_name=2'
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='PZ_MTEST', a=a, verbose=0, use_cache=False)
>>> # a='default:dsize=2,qsize=1,excluderef=True,qnum_names=5,min_pername=3,qsample_per_name=1,dsample_per_name=2',
>>> ibs = qreq_.ibs
>>> cm_list = qreq_.execute()
>>> self1 = AnnotInference(qreq_, cm_list)
>>> inf_dict1 = self1.make_annot_inference_dict(True)
>>> user_feedback =  self1.simulate_user_feedback()
>>> self2 = AnnotInference(qreq_, cm_list, user_feedback)
>>> inf_dict2 = self2.make_annot_inference_dict(True)
>>> print('inference_dict = ' + ut.repr3(inf_dict1, nl=3))
>>> print('inference_dict2 = ' + ut.repr3(inf_dict2, nl=3))
>>> ut.quit_if_noshow()
>>> graph1 = self1.make_graph(show=True)
>>> graph2 = self2.make_graph(show=True)
>>> ut.show_if_requested()
choose_thresh(infr)[source]
make_annot_inference_dict(infr, internal=False)[source]
make_clusters(infr)[source]
make_graph(infr, show=False)[source]
make_inference(infr)[source]
make_prob_annots(infr)[source]
make_prob_names(infr)[source]
rrr(verbose=True)

special class reloading function

simulate_user_feedback(infr)[source]
class ibeis.algo.hots.graph_tmp.AnnotInference2(ibs, aids, nids, current_nids=None)[source]

Bases: object

construct_graph2(infr)[source]
exec_split_check()[source]
infer_cut(infr)[source]
initialize_graph()[source]
initialize_graph_and_model(infr)[source]

Unused in internal split stuff

pt.qt4ensure()
layout_info = pt.show_nx(graph, as_directed=False, fnum=1,
                         layoutkw=dict(prog='neato'), use_image=True, verbose=0)
ax = pt.gca()
pt.zoom_factory()
pt.interactions.PanEvents()

rrr(verbose=True)

special class reloading function

class ibeis.algo.hots.graph_tmp.InfrModel(model, graph)[source]

Bases: utool.util_dev.NiceRepr

estimate_threshold(model, method=None)[source]

import plottool as pt
idx3 = vt.find_elbow_point(curve[idx1:idx2 + 1]) + idx1
pt.plot(curve)
pt.plot(idx1, curve[idx1], 'bo')
pt.plot(idx2, curve[idx2], 'ro')
pt.plot(idx3, curve[idx3], 'go')

rrr(verbose=True)

special class reloading function

run_inference(model, thresh=None, n_labels=None, n_iter=5, algorithm=u'expansion')[source]
run_inference2(model, max_labels=5)[source]
total_energy
update_graph(model)[source]

ibeis.algo.hots.hstypes module

hstypes

Todo:
  • SIFT: Root_SIFT -> L2 normalized -> Centering.
    # http://hal.archives-ouvertes.fr/docs/00/84/07/21/PDF/RR-8325.pdf
    The devil is in the details: http://www.robots.ox.ac.uk/~vilem/bmvc2011.pdf
    This says don't clip, do rootsift instead.
    # http://hal.archives-ouvertes.fr/docs/00/68/81/69/PDF/hal_v1.pdf
  • Quantization of residual vectors
  • Burstiness normalization for N-SMK
  • Implement A-SMK
  • Incorporate Spatial Verification
  • Implement correct cfgstrs based on algorithm input for cached computations.
  • Color by word
  • Profile on hyrule
  • Train vocab on paris
  • Remove self matches.
  • New SIFT parameters for pyhesaff (root, powerlaw, meanwhatever, output_dtype)

TODO:
This needs to be less constant when using non-sift descriptors

Issues:
  • 10GB are in use when performing query on Oxford 5K
  • errors when there is a word without any database vectors; currently a weight of zero is hacked in

class ibeis.algo.hots.hstypes.FiltKeys[source]

Bases: object

BARL2 = 'bar_l2'
DIST = 'dist'
DISTINCTIVENESS = 'distinctiveness'
FG = 'fg'
HOMOGERR = 'homogerr'
LNBNN = 'lnbnn'
RATIO = 'ratio'
ibeis.algo.hots.hstypes.PSEUDO_UINT8_MAX_SQRD = 262144.0

SeeAlso – vt.distance.understanding_pseudomax_props

ibeis.algo.hots.match_chips4 module

Runs functions in the pipeline to get query results and does some caching.

ibeis.algo.hots.match_chips4.empty_query(ibs, qaids)[source]

Hack to give an empty query a query result object

Parameters:
  • ibs (ibeis.IBEISController) – ibeis controller object
  • qaids (list) –
Returns:

(qaid2_cm, qreq_)

Return type:

tuple

CommandLine:

python -m ibeis.algo.hots.match_chips4 --test-empty_query
python -m ibeis.algo.hots.match_chips4 --test-empty_query --show

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.match_chips4 import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb('testdb1')
>>> qaids = ibs.get_valid_aids(species=ibeis.const.TEST_SPECIES.ZEB_PLAIN)
>>> # execute function
>>> (qaid2_cm, qreq_) = empty_query(ibs, qaids)
>>> # verify results
>>> result = str((qaid2_cm, qreq_))
>>> print(result)
>>> cm = qaid2_cm[qaids[0]]
>>> ut.assert_eq(len(cm.get_top_aids()), 0)
>>> ut.quit_if_noshow()
>>> cm.ishow_top(ibs, update=True, make_figtitle=True, show_query=True, sidebyside=False)
>>> from matplotlib import pyplot as plt
>>> plt.show()
ibeis.algo.hots.match_chips4.execute_query2(ibs, qreq_, verbose, save_qcache, batch_size=None)[source]

Breaks the query request into several subrequests so it can be processed more efficiently and more safely.
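
The chunking itself can be illustrated with a tiny hypothetical helper (not the actual ibeis code): split the query aids into fixed-size batches and hand each batch to the pipeline as its own subrequest.

def batch_aids(qaid_list, batch_size):
    # yield successive fixed-size chunks of query annotation ids
    for i in range(0, len(qaid_list), batch_size):
        yield qaid_list[i:i + batch_size]

# list(batch_aids([1, 2, 3, 4, 5], 2)) -> [[1, 2], [3, 4], [5]]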

ibeis.algo.hots.match_chips4.execute_query_and_save_L1(ibs, qreq_, use_cache, save_qcache, verbose=True, batch_size=None)[source]
Parameters:
  • ibs (ibeis.IBEISController) –
  • qreq (ibeis.QueryRequest) –
  • use_cache (bool) –
Returns:

qaid2_cm

CommandLine:

python -m ibeis.algo.hots.match_chips4 --test-execute_query_and_save_L1:0
python -m ibeis.algo.hots.match_chips4 --test-execute_query_and_save_L1:1
python -m ibeis.algo.hots.match_chips4 --test-execute_query_and_save_L1:2
python -m ibeis.algo.hots.match_chips4 --test-execute_query_and_save_L1:3
Example0:
>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.match_chips4 import *  # NOQA
>>> cfgdict1 = dict(codename='vsmany', sv_on=True)
>>> p = 'default' + ut.get_cfg_lbl(cfgdict1)
>>> import ibeis
>>> qreq_ = ibeis.main_helpers.testdata_qreq_(p=p, qaid_override=[1, 2, 3, 4])
>>> ibs = qreq_.ibs
>>> use_cache, save_qcache, verbose = False, False, True
>>> qaid2_cm = execute_query_and_save_L1(ibs, qreq_, use_cache, save_qcache, verbose)
>>> print(qaid2_cm)
Example1:
>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.match_chips4 import *  # NOQA
>>> cfgdict1 = dict(codename='vsone', sv_on=True)
>>> p = 'default' + ut.get_cfg_lbl(cfgdict1)
>>> import ibeis
>>> qreq_ = ibeis.main_helpers.testdata_qreq_(p=p, qaid_override=[1, 2, 3, 4])
>>> ibs = qreq_.ibs
>>> use_cache, save_qcache, verbose = False, False, True
>>> qaid2_cm = execute_query_and_save_L1(ibs, qreq_, use_cache, save_qcache, verbose)
>>> print(qaid2_cm)
Example1:
>>> # SLOW_DOCTEST
>>> # TEST SAVE
>>> from ibeis.algo.hots.match_chips4 import *  # NOQA
>>> import ibeis
>>> cfgdict1 = dict(codename='vsmany', sv_on=True)
>>> p = 'default' + ut.get_cfg_lbl(cfgdict1)
>>> qreq_ = ibeis.main_helpers.testdata_qreq_(p=p, qaid_override=[1, 2, 3, 4])
>>> ibs = qreq_.ibs
>>> use_cache, save_qcache, verbose = False, True, True
>>> qaid2_cm = execute_query_and_save_L1(ibs, qreq_, use_cache, save_qcache, verbose)
>>> print(qaid2_cm)
Example2:
>>> # SLOW_DOCTEST
>>> # TEST LOAD
>>> from ibeis.algo.hots.match_chips4 import *  # NOQA
>>> import ibeis
>>> cfgdict1 = dict(codename='vsmany', sv_on=True)
>>> p = 'default' + ut.get_cfg_lbl(cfgdict1)
>>> qreq_ = ibeis.main_helpers.testdata_qreq_(p=p, qaid_override=[1, 2, 3, 4])
>>> ibs = qreq_.ibs
>>> use_cache, save_qcache, verbose = True, True, True
>>> qaid2_cm = execute_query_and_save_L1(ibs, qreq_, use_cache, save_qcache, verbose)
>>> print(qaid2_cm)
Example2:
>>> # ENABLE_DOCTEST
>>> # TEST PARTIAL HIT
>>> from ibeis.algo.hots.match_chips4 import *  # NOQA
>>> import ibeis
>>> cfgdict1 = dict(codename='vsmany', sv_on=False, prescore_method='csum')
>>> p = 'default' + ut.get_cfg_lbl(cfgdict1)
>>> qreq_ = ibeis.main_helpers.testdata_qreq_(p=p, qaid_override=[1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> ibs = qreq_.ibs
>>> use_cache, save_qcache, verbose = False, True, False
>>> qaid2_cm = execute_query_and_save_L1(ibs, qreq_, use_cache, save_qcache, verbose, batch_size=3)
>>> cm = qaid2_cm[1]
>>> ut.delete(cm.get_fpath(qreq_))
>>> cm = qaid2_cm[4]
>>> ut.delete(cm.get_fpath(qreq_))
>>> cm = qaid2_cm[5]
>>> ut.delete(cm.get_fpath(qreq_))
>>> cm = qaid2_cm[6]
>>> ut.delete(cm.get_fpath(qreq_))
>>> print('Re-execute')
>>> qaid2_cm_ = execute_query_and_save_L1(ibs, qreq_, use_cache, save_qcache, verbose, batch_size=3)
>>> assert all([qaid2_cm_[qaid] == qaid2_cm[qaid] for qaid in qreq_.qaids])
>>> [ut.delete(fpath) for fpath in qreq_.get_chipmatch_fpaths(qreq_.qaids)]
ibeis.algo.hots.match_chips4.submit_query_request(ibs, qaid_list, daid_list, use_cache=None, use_bigcache=None, cfgdict=None, qreq_=None, verbose=None, save_qcache=None, prog_hook=None)[source]

The standard query interface.

TODO: rename use_cache to use_qcache

Checks a big cache for qaid2_cm. If cache miss, tries to load each cm individually. On an individual cache miss, it performs the query.
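
A deliberately simplified sketch of that two-level caching strategy (plain dicts stand in for the on-disk caches and a caller-supplied run_query callable stands in for the pipeline; none of these names are real ibeis APIs):

def cached_query_sketch(big_cache, single_cache, run_query, request_key, qaids):
    # 1) try the big cache keyed on the whole request
    if request_key in big_cache:
        return big_cache[request_key]
    # 2) fall back to per-annotation caches
    qaid2_cm = {qaid: single_cache[qaid] for qaid in qaids if qaid in single_cache}
    miss_qaids = [qaid for qaid in qaids if qaid not in qaid2_cm]
    # 3) run the query pipeline only for the cache misses
    qaid2_cm.update(run_query(miss_qaids))
    # 4) populate both caches for next time
    single_cache.update(qaid2_cm)
    big_cache[request_key] = qaid2_cm
    return qaid2_cm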

Parameters:
  • ibs (ibeis.IBEISController) – ibeis control object
  • qaid_list (list) – query annotation ids
  • daid_list (list) – database annotation ids
  • use_cache (bool) –
  • use_bigcache (bool) –
Returns:

qaid2_cm – dict of QueryResult objects

Return type:

dict

CommandLine:

python -m ibeis.algo.hots.match_chips4 --test-submit_query_request

Examples

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.match_chips4 import *  # NOQA
>>> import ibeis
>>> qaid_list = [1]
>>> daid_list = [1, 2, 3, 4, 5]
>>> use_bigcache = True
>>> use_cache = True
>>> ibs = ibeis.opendb(db='testdb1')
>>> qreq_ = ibs.new_query_request(qaid_list, daid_list, cfgdict={}, verbose=True)
>>> qaid2_cm = submit_query_request(ibs, qaid_list, daid_list, use_cache, use_bigcache, qreq_=qreq_)
ibeis.algo.hots.match_chips4.submit_query_request_nocache(ibs, qreq_, verbose=False)[source]

deprecate

ibeis.algo.hots.multi_index module

Module which uses multiple flann indexes as a way of working around adding points to a single flann structure, which seems to cause crashes.
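
The core merging idea can be sketched as follows (hypothetical code; the real MultiNeighborIndex also tracks aids, feature weights, and reindexing thresholds, and offset_list here is assumed to hold the start offset of each sub-index): query each sub-index separately, shift the returned local indices into a global index space, and keep the K best neighbors overall.

import numpy as np

def merged_knn_sketch(subindexer_list, offset_list, qfx2_vec, K):
    # each subindexer is assumed to expose knn(vecs, K) -> (local_idx, dist)
    idx_blocks, dist_blocks = [], []
    for subindexer, offset in zip(subindexer_list, offset_list):
        local_idx, dist = subindexer.knn(qfx2_vec, K)
        idx_blocks.append(local_idx + offset)  # shift into the global index space
        dist_blocks.append(dist)
    idx_all = np.hstack(idx_blocks)
    dist_all = np.hstack(dist_blocks)
    # keep the K nearest neighbors per query feature across all sub-indexes
    sortx = dist_all.argsort(axis=1)[:, :K]
    rows = np.arange(idx_all.shape[0])[:, None]
    return idx_all[rows, sortx], dist_all[rows, sortx]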

class ibeis.algo.hots.multi_index.MultiNeighborIndex(mxer, nn_indexer_list, min_reindex_thresh=10, max_subindexers=2)[source]

Bases: object

TODO: rename to DistributedNeighborIndex

Generalization of a NeighborIndex. A more abstract wrapper around flann.

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> mxer, qreq_, ibs = testdata_mindexer()
add_ibeis_support(mxer, qreq_, new_aid_list)[source]

Chooses indexer with smallest number of annotations and reindexes it.

Parameters:
  • qreq (QueryRequest) – query request object with hyper-parameters
  • new_aid_list (list) –

CommandLine:

python -m ibeis.algo.hots.multi_index --test-add_ibeis_support

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> mxer, qreq_, ibs = testdata_mindexer()
>>> new_aid_list = ibs.get_valid_aids()[70:80]
>>> # execute function
>>> result = mxer.add_ibeis_support(qreq_, new_aid_list)
>>> # verify results
>>> print(result)
add_support(mxer, new_aid_list, new_vecs_list, new_fgws_list, verbose=True)[source]

Chooses indexer with smallest number of annotations and reindexes it.

assert_can_add_aids(mxer, new_aid_list)[source]

Aids that are already indexed should never be added.

get_dtype(mxer)[source]
get_indexed_aids(mxer)[source]
get_multi_indexed_aids(mxer)[source]
get_multi_num_indexed_annots(mxer)[source]
get_nIndexed_list(mxer)[source]

returns a list of the number of indexed vectors in each subindexer

Args:

Returns:nIndexed_list
Return type:list

CommandLine:

python -m ibeis.algo.hots.multi_index --test-get_nIndexed_list

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> mxer, qreq_, ibs = testdata_mindexer()
>>> nIndexed_list = mxer.get_nIndexed_list()
>>> target = np.array([21384, 15243, 12808, 4809, 3542, 2696])
>>> error = ut.assert_almost_eq(nIndexed_list, target, 100)
>>> print('error.max() = %r' % (error.max(),))
>>> #np.all(ut.inbounds(nIndexed_list, low, high))
get_nn_aids(mxer, qfx2_imx)[source]
Parameters:qfx2_imx (ndarray) –
Returns:qfx2_aid
Return type:ndarray

CommandLine:

python -m ibeis.algo.hots.multi_index --test-get_nn_aids

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> import numpy as np
>>> mxer, qreq_, ibs = testdata_mindexer()
>>> K = 3
>>> qaid = 1
>>> qfx2_vec = ibs.get_annot_vecs(qaid, config2_=qreq_.get_internal_query_config2())
>>> (qfx2_imx, qfx2_dist) = mxer.knn(qfx2_vec, K)
>>> qfx2_aid = mxer.get_nn_aids(qfx2_imx)
>>> gt_aids = ibs.get_annot_groundtruth(qaid)
>>> result = np.array_str(qfx2_aid[0:2])
>>> # Make sure there are lots (like 5%) of correct matches
>>> mask_cover = vt.get_covered_mask(qfx2_aid, gt_aids)
>>> num_correct   = mask_cover.sum()
>>> num_incorrect = (~mask_cover).sum()
>>> print('fraction correct = %r' % (num_correct / float(num_incorrect),))
>>> ut.assert_inbounds(num_correct, 900, 1100,
...                    'not enough matches to groundtruth')
get_nn_featxs(mxer, qfx2_imx)[source]
Parameters:qfx2_imx (ndarray) –
Returns:qfx2_fx
Return type:ndarray

CommandLine:

python -m ibeis.algo.hots.multi_index --test-get_nn_featxs

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> import numpy as np
>>> mxer, qreq_, ibs = testdata_mindexer()
>>> K = 3
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> (qfx2_imx, qfx2_dist) = mxer.knn(qfx2_vec, K)
>>> qfx2_fgw = mxer.get_nn_featxs(qfx2_imx)
>>> result = np.array_str(qfx2_fgw)
>>> print(result)
get_nn_fgws(mxer, qfx2_imx)[source]
Parameters:qfx2_imx (ndarray) –
Returns:qfx2_fgw
Return type:ndarray

CommandLine:

python -m ibeis.algo.hots.multi_index --test-get_nn_fgws

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> import numpy as np
>>> mxer, qreq_, ibs = testdata_mindexer()
>>> K = 3
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> (qfx2_imx, qfx2_dist) = mxer.knn(qfx2_vec, K)
>>> qfx2_fgw = mxer.get_nn_fgws(qfx2_imx)
>>> result = np.array_str(qfx2_fgw)
>>> print(result)
get_offsets(mxer)[source]
Returns:
Return type:list

CommandLine:

python -m ibeis.algo.hots.multi_index --test-get_offsets

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> mxer, qreq_, ibs = testdata_mindexer()
>>> offset_list = mxer.get_offsets()
>>> #target = np.array([15257, 12769,  4819,  3542,  2694])
>>> target = np.array([21384, 36627, 49435, 54244, 57786, 60482])
>>> error = ut.assert_almost_eq(offset_list, target, 100)
>>> print('error.max() = %r' % (error.max(),))
iter_subindexers(mxer, qfx2_imx)[source]

generates subindexers along with the indices and masks of the entries in qfx2_imx that belong to each subindexer
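
The routing of a global index back to its owning subindexer can be illustrated with np.searchsorted over the offset list (illustration only; the real generator also yields the local indices and a boolean mask per subindexer):

import numpy as np

offset_list = np.array([5, 9, 12])    # end offset of each of three subindexers
qfx2_imx = np.array([[0, 6, 11],
                     [4, 5, 10]])
# subindexer id that owns each global index
qfx2_treex = np.searchsorted(offset_list, qfx2_imx, side='right')
# qfx2_treex == [[0, 1, 2], [0, 1, 2]]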

Parameters:qfx2_imx (ndarray) –

CommandLine:

python -m ibeis.algo.hots.multi_index --test-iter_subindexers

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> mxer, qreq_, ibs = testdata_mindexer()
>>> K = 3
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> (qfx2_imx, qfx2_dist) = mxer.knn(qfx2_vec, K)
>>> genlist_ = list(mxer.iter_subindexers(qfx2_imx))
>>> covered = np.zeros(qfx2_imx.shape)
>>> for nnindexer, idxs, mask in genlist_:
...     print(covered.sum())
...     assert idxs.size == mask.sum()
...     assert covered[mask].sum() == 0
...     covered[mask] = True
>>> print(covered.sum())
>>> assert covered.sum() == covered.size
knn(mxer, qfx2_vec, K)[source]

Polymorphic interface to knn, but uses the multi-index backend

CommandLine:

python -m ibeis.algo.hots.multi_index --test-knn:0
Example1:
>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> import numpy as np
>>> mxer, qreq_, ibs = testdata_mindexer()
>>> K = 3
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> (qfx2_imx, qfx2_dist) = mxer.knn(qfx2_vec, K)
>>> print(qfx2_imx.shape)
>>> assert qfx2_imx.shape[1] == 18
>>> ut.assert_inbounds(qfx2_imx.shape[0], 1073, 1079)
Example2:
>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> mxer, qreq_, ibs = testdata_mindexer()
>>> K = 3
>>> qfx2_vec = np.empty((0, 128), dtype=mxer.get_dtype())
>>> (qfx2_imx, qfx2_dist) = mxer.knn(qfx2_vec, K)
>>> result = str(np.shape(qfx2_imx))
>>> print(result)
(0, 18)
knn2(mxer, qfx2_vec, K)[source]

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> mxer, qreq_, ibs = testdata_mindexer()
>>> K = 3
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> (qfx2_dist_, qfx2_idx_,  qfx2_fx_, qfx2_ax_, qfx2_rankx_, qfx2_treex_,) = mxer.knn2(qfx2_vec, K)
multi_knn(mxer, qfx2_vec, K)[source]

Does a query on each of the subindexer kdtrees and returns a list of the results

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> import numpy as np
>>> mxer, qreq_, ibs = testdata_mindexer()
>>> K = 3
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> (qfx2_idx_list, qfx2_dist_list) = mxer.multi_knn(qfx2_vec, K)
>>> shape_list = list(map(np.shape, qfx2_idx_list))
>>> d1_list = ut.get_list_column(shape_list, 0)
>>> d2_list = ut.get_list_column(shape_list, 1)
>>> ut.assert_eq(d2_list, [3] * 6)
>>> ut.assert_eq(d1_list, [len(qfx2_vec)] * 6)
num_indexed_annots(mxer)[source]

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> mxer, qreq_, ibs = testdata_mindexer()
>>> result = mxer.num_indexed_annots()
>>> print(result)
59
num_indexed_vecs(mxer)[source]

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> mxer, qreq_, ibs = testdata_mindexer()
>>> num_indexed = mxer.num_indexed_vecs()
>>> ut.assert_inbounds(num_indexed, 60300, 60500)
rrr(verbose=True)

special class reloading function

ibeis.algo.hots.multi_index.group_daids_for_indexing_by_name(ibs, daid_list, num_indexers=8, verbose=True)[source]

returns groups with only one annotation per name in each group

ibeis.algo.hots.multi_index.request_ibeis_mindexer(qreq_, index_method='multi', verbose=True)[source]

CommandLine:

python -m ibeis.algo.hots.multi_index --test-request_ibeis_mindexer:2
Example0:
>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(db='PZ_MTEST')
>>> valid_aids = ibs.get_valid_aids()
>>> daid_list = valid_aids[1:60]
>>> cfgdict = dict(fg_on=False)
>>> qreq_ = ibs.new_query_request(daid_list, daid_list, cfgdict=cfgdict)
>>> index_method = 'multi'
>>> mxer = request_ibeis_mindexer(qreq_, index_method)
Example1:
>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(db='PZ_Master0')
>>> valid_aids = ibs.get_valid_aids()
>>> daid_list = valid_aids[1:60]
>>> cfgdict = dict(fg_on=False)
>>> qreq_ = ibs.new_query_request(daid_list, daid_list, cfgdict=cfgdict)
>>> index_method = 'multi'
>>> mxer = request_ibeis_mindexer(qreq_, index_method)
Example2:
>>> # DISABLE_DOCTEST
>>> # Test background reindex
>>> from ibeis.algo.hots.multi_index import *  # NOQA
>>> import ibeis
>>> import time
>>> ibs = ibeis.opendb(db='PZ_MTEST')
>>> valid_aids = ibs.get_valid_aids()
>>> # Remove all cached nnindexers
>>> ibs.delete_flann_cachedir()
>>> # This request should build a new nnindexer
>>> daid_list = valid_aids[1:30]
>>> cfgdict = dict(fg_on=False)
>>> qreq_ = ibs.new_query_request(daid_list, daid_list, cfgdict=cfgdict)
>>> index_method = 'multi'
>>> mxer = request_ibeis_mindexer(qreq_, index_method)
>>> ut.assert_eq(len(mxer.nn_indexer_list), 1, 'one subindexer')
>>> # The next request should trigger a background process
>>> # and build two subindexer
>>> daid_list = valid_aids[1:60]
>>> qreq_ = ibs.new_query_request(daid_list, daid_list, cfgdict=cfgdict)
>>> index_method = 'multi'
>>> mxer = request_ibeis_mindexer(qreq_, index_method)
>>> # Do some work in the foreground to ensure that it doesnt block
>>> # the background job
>>> print('[FG] sleeping or doing bit compute')
>>> # Takes about 15 seconds
>>> with ut.Timer():
...     ut.enumerate_primes(int(9E4))
>>> #time.sleep(10)
>>> print('[FG] done sleeping')
>>> ut.assert_eq(len(mxer.nn_indexer_list), 2, 'two subindexer')
>>> # And this should build just one subindexer
>>> daid_list = valid_aids[1:60]
>>> qreq_ = ibs.new_query_request(daid_list, daid_list, cfgdict=cfgdict)
>>> index_method = 'multi'
>>> mxer = request_ibeis_mindexer(qreq_, index_method)
>>> ut.assert_eq(len(mxer.nn_indexer_list), 1, 'one big subindexer')
ibeis.algo.hots.multi_index.sort_along_rows(qfx2_xxx, qfx2_sortx)[source]

sorts each row in qfx2_xxx with the corresponding row in qfx2_sortx
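
A tiny illustration of the intended semantics (each row of qfx2_xxx is reordered by the corresponding index row in qfx2_sortx); this is a sketch, not the module's code:

import numpy as np

qfx2_xxx = np.array([[30, 10, 20],
                     [ 3,  1,  2]])
qfx2_sortx = np.array([[1, 2, 0],
                       [1, 2, 0]])
rows = np.arange(qfx2_xxx.shape[0])[:, None]
qfx2_sorted = qfx2_xxx[rows, qfx2_sortx]   # row-wise fancy indexing
# qfx2_sorted == [[10, 20, 30], [1, 2, 3]]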

ibeis.algo.hots.multi_index.testdata_mindexer()[source]

ibeis.algo.hots.name_scoring module

class ibeis.algo.hots.name_scoring.NameScoreTup(sorted_nids, sorted_nscore, sorted_aids, sorted_scores)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

sorted_aids

Alias for field number 2

sorted_nids

Alias for field number 0

sorted_nscore

Alias for field number 1

sorted_scores

Alias for field number 3

ibeis.algo.hots.name_scoring.align_name_scores_with_annots(annot_score_list, annot_aid_list, daid2_idx, name_groupxs, name_score_list)[source]

takes name scores and gives them to the best annotation

Returns:

list of scores aligned with cm.daid_list and cm.dnid_list

Return type:

score_list

Parameters:
  • annot_score_list (list) – score associated with each annot
  • name_groupxs (list) – groups annot_score lists into groups compatible with name_score_list
  • name_score_list (list) – score assocated with name
  • nid2_nidx (dict) – mapping from nids to index in name score list

CommandLine:

python -m ibeis.algo.hots.name_scoring --test-align_name_scores_with_annots
python -m ibeis.algo.hots.name_scoring --test-align_name_scores_with_annots --show

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.name_scoring import *  # NOQA
>>> #ibs, qreq_, cm_list = plh.testdata_pre_sver('PZ_MTEST', qaid_list=[18])
>>> ibs, qreq_, cm_list = plh.testdata_post_sver('PZ_MTEST', qaid_list=[18])
>>> cm = cm_list[0]
>>> cm.evaluate_csum_score(qreq_)
>>> cm.evaluate_nsum_score(qreq_)
>>> # Annot aligned lists
>>> annot_score_list = cm.algo_annot_scores['csum']
>>> annot_aid_list   = cm.daid_list
>>> daid2_idx        = cm.daid2_idx
>>> # Name aligned lists
>>> name_score_list  = cm.algo_name_scores['nsum']
>>> name_groupxs     = cm.name_groupxs
>>> # Execute Function
>>> score_list = align_name_scores_with_annots(annot_score_list, annot_aid_list, daid2_idx, name_groupxs, name_score_list)
>>> # Check that the correct name gets the highest score
>>> target = name_score_list[cm.nid2_nidx[cm.qnid]]
>>> test_index = np.where(score_list == target)[0][0]
>>> cm.score_list = score_list
>>> ut.assert_eq(ibs.get_annot_name_rowids(cm.daid_list[test_index]), cm.qnid)
>>> assert ut.isunique(cm.dnid_list[score_list > 0]), 'bad name score'
>>> assert cm.get_top_nids()[0] == cm.unique_nids[cm.nsum_score_list.argmax()], 'bug in alignment'
>>> ut.quit_if_noshow()
>>> cm.show_ranked_matches(qreq_)
>>> ut.show_if_requested()

Example

>>> from ibeis.algo.hots.name_scoring import *  # NOQA
>>> annot_score_list = []
>>> annot_aid_list   = []
>>> daid2_idx        = {}
>>> # Name aligned lists
>>> name_score_list  = np.array([], dtype=np.float32)
>>> name_groupxs     = []
>>> # Execute Function
>>> score_list = align_name_scores_with_annots(annot_score_list, annot_aid_list, daid2_idx, name_groupxs, name_score_list)
ibeis.algo.hots.name_scoring.compute_nsum_score(cm, qreq_=None)[source]

nsum

Parameters:cm (ibeis.ChipMatch) –
Returns:(unique_nids, nsum_score_list)
Return type:tuple

CommandLine:

python -m ibeis.algo.hots.name_scoring --test-compute_nsum_score
python -m ibeis.algo.hots.name_scoring --test-compute_nsum_score:0
python -m ibeis.algo.hots.name_scoring --test-compute_nsum_score:2
utprof.py -m ibeis.algo.hots.name_scoring --test-compute_nsum_score:2
utprof.py -m ibeis.algo.hots.pipeline --test-request_ibeis_query_L0:0 --db PZ_Master1 -a timectrl:qindex=0:256
Example0:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.name_scoring import *  # NOQA
>>> # build test data
>>> cm = testdata_chipmatch()
>>> # execute function
>>> (unique_nids, nsum_score_list) = compute_nsum_score(cm)
>>> result = ut.list_str((unique_nids, nsum_score_list), label_list=['unique_nids', 'nsum_score_list'], with_dtype=False)
>>> print(result)
unique_nids = np.array([1, 2, 3])
nsum_score_list = np.array([ 4.,  7.,  5.])
Example1:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.name_scoring import *  # NOQA
>>> #ibs, qreq_, cm_list = plh.testdata_pre_sver('testdb1', qaid_list=[1])
>>> ibs, qreq_, cm_list = plh.testdata_post_sver('PZ_MTEST', qaid_list=[18])
>>> cm = cm_list[0]
>>> cm.evaluate_dnids(qreq_.ibs)
>>> cm._cast_scores()
>>> #cm.qnid = 1   # Hack for testdb1 names
>>> nsum_nid_list, nsum_score_list = compute_nsum_score(cm, qreq_)
>>> assert np.all(nsum_nid_list == cm.unique_nids), 'nids out of alignment'
>>> flags = (nsum_nid_list == cm.qnid)
>>> max_true = nsum_score_list[flags].max()
>>> max_false = nsum_score_list[~flags].max()
>>> assert max_true > max_false, 'is this truely a hard case?'
>>> assert max_true > 1.2, 'score=%r should be higher for aid=18' % (max_true,)
>>> nsum_nid_list2, nsum_score_list2, _ = compute_nsum_score2(cm, qreq_)
>>> assert np.allclose(nsum_score_list2, nsum_score_list), 'something is very wrong'
>>> #assert np.all(nsum_score_list2 == nsum_score_list), 'could be a percision issue'
Example2:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.name_scoring import *  # NOQA
>>> #ibs, qreq_, cm_list = plh.testdata_pre_sver('testdb1', qaid_list=[1])
>>> ibs, qreq_, cm_list = plh.testdata_post_sver('PZ_MTEST', qaid_list=[18], cfgdict=dict(augment_queryside_hack=True))
>>> cm = cm_list[0]
>>> cm.score_nsum(qreq_)
>>> #cm.evaluate_dnids(qreq_.ibs)
>>> #cm.qnid = 1   # Hack for testdb1 names
>>> #nsum_nid_list, nsum_score_list = compute_nsum_score(cm, qreq_=qreq_)
>>> ut.quit_if_noshow()
>>> cm.show_ranked_matches(qreq_, ori=True)
Example3:
>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.name_scoring import *  # NOQA
>>> #ibs, qreq_, cm_list = plh.testdata_pre_sver('testdb1', qaid_list=[1])
>>> ibs, qreq_, cm_list = plh.testdata_post_sver('testdb1', qaid_list=[1], cfgdict=dict(augment_queryside_hack=True))
>>> cm = cm_list[0]
>>> cm.score_nsum(qreq_)
>>> #cm.evaluate_dnids(qreq_.ibs)
>>> #cm.qnid = 1   # Hack for testdb1 names
>>> #nsum_nid_list, nsum_score_list = compute_nsum_score(cm, qreq_=qreq_)
>>> ut.quit_if_noshow()
>>> cm.show_ranked_matches(qreq_, ori=True)
Example4:
>>> # ENABLE_DOCTEST
>>> # FIXME: breaks when fg_on=True
>>> from ibeis.algo.hots.name_scoring import *  # NOQA
>>> from ibeis.algo.hots import name_scoring
>>> from ibeis.algo.hots import scoring
>>> import ibeis
>>> # Test to make sure name score and chips score are equal when per_name=1
>>> qreq_, args = plh.testdata_pre(
>>>     'spatial_verification', defaultdb='PZ_MTEST',
>>>     a=['default:dpername=1,qsize=1,dsize=10'],
>>>     p=['default:K=1,fg_on=True,sqrd_dist_on=True'])
>>> cm = args.cm_list_FILT[0]
>>> ibs = qreq_.ibs
>>> # Ensure there is only one aid per database name
>>> assert isinstance(ibs, ibeis.control.IBEISControl.IBEISController)
>>> #stats_dict = ibs.get_annot_stats_dict(qreq_.get_external_daids(), prefix='d')
>>> #stats = stats_dict['dper_name']
>>> stats = ibs.get_annot_per_name_stats(qreq_.get_external_daids())
>>> print('per_name_stats = %s' % (ut.dict_str(stats, nl=False),))
>>> assert stats['mean'] == 1 and stats['std'] == 0, 'this test requires one annot per name in the database'
>>> cm.evaluate_dnids(qreq_.ibs)
>>> cm.assert_self(qreq_)
>>> cm._cast_scores()
>>> # cm.fs_list = cm.fs_list.astype(np.float)
>>> nsum_nid_list, nsum_score_list = name_scoring.compute_nsum_score(cm, qreq_)
>>> nsum_nid_list2, nsum_score_list2, _ = name_scoring.compute_nsum_score2(cm, qreq_)
>>> csum_score_list = scoring.compute_csum_score(cm)
>>> vt.asserteq(nsum_score_list, csum_score_list)
>>> vt.asserteq(nsum_score_list, csum_score_list, thresh=0, iswarning=True)
>>> vt.asserteq(nsum_score_list2, csum_score_list, thresh=0, iswarning=True)
>>> #assert np.allclose(nsum_score_list, csum_score_list), 'should be the same when K=1 and per_name=1'
>>> #assert all(nsum_score_list  == csum_score_list), 'should be the same when K=1 and per_name=1'
>>> #assert all(nsum_score_list2 == csum_score_list), 'should be the same when K=1 and per_name=1'
>>> # Evaluate parts of the sourcecode
ibeis.algo.hots.name_scoring.compute_nsum_score2(cm, qreq_=None)[source]
Example3:
>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.name_scoring import *  # NOQA
>>> #ibs, qreq_, cm_list = plh.testdata_pre_sver('testdb1', qaid_list=[1])
>>> ibs, qreq_, cm_list = plh.testdata_post_sver('testdb1', qaid_list=[1], cfgdict=dict(fg_on=False, augment_queryside_hack=True))
>>> cm = cm_list[0]
>>> cm.evaluate_dnids(qreq_.ibs)
>>> nsum_nid_list1, nsum_score_list1, featflag_list1 = compute_nsum_score2(cm, qreq_)
>>> nsum_nid_list2, nsum_score_list2 = compute_nsum_score(cm, qreq_)
>>> ut.quit_if_noshow()
>>> cm.show_ranked_matches(qreq_, ori=True)
ibeis.algo.hots.name_scoring.get_chipmatch_namescore_nonvoting_feature_flags(cm, qreq_=None)[source]

Computes flags to describe which features can and cannot vote

CommandLine:

python -m ibeis.algo.hots.name_scoring --exec-get_chipmatch_namescore_nonvoting_feature_flags

Example

>>> # ENABLE_DOCTEST
>>> # FIXME: breaks when fg_on=True
>>> from ibeis.algo.hots.name_scoring import *  # NOQA
>>> from ibeis.algo.hots import name_scoring
>>> # Test to make sure name score and chips score are equal when per_name=1
>>> qreq_, args = plh.testdata_pre('spatial_verification', defaultdb='PZ_MTEST', a=['default:dpername=1,qsize=1,dsize=10'], p=['default:K=1,fg_on=True'])
>>> cm_list = args.cm_list_FILT
>>> ibs = qreq_.ibs
>>> cm = cm_list[0]
>>> cm.evaluate_dnids(qreq_.ibs)
>>> featflat_list = get_chipmatch_namescore_nonvoting_feature_flags(cm, qreq_)
>>> assert all(list(map(np.all, featflat_list))), 'all features should be able to vote in K=1, per_name=1 case'
ibeis.algo.hots.name_scoring.get_namescore_nonvoting_feature_flags(fm_list, fs_list, dnid_list, name_groupxs, kpts1=None)[source]

fm_list = [fm[:min(len(fm), 10)] for fm in fm_list]
fs_list = [fs[:min(len(fs), 10)] for fs in fs_list]

ibeis.algo.hots.name_scoring.group_scores_by_name(ibs, aid_list, score_list)[source]

Converts annotation scores to name scores. Over multiple annotations of the same name, finds the best match and uses its score.
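
A minimal sketch of that grouping (illustration only; the real function returns a NameScoreTup with the grouped aids and scores as well):

import numpy as np

def group_scores_by_name_sketch(nid_list, score_list):
    # name score = best annotation score among the annotations of that name
    nids = np.array(nid_list)
    scores = np.array(score_list)
    unique_nids = np.unique(nids)
    name_scores = np.array([scores[nids == nid].max() for nid in unique_nids])
    # sort names by descending name score
    sortx = name_scores.argsort()[::-1]
    return unique_nids[sortx], name_scores[sortx]

# group_scores_by_name_sketch([1, 5, 5], [3.0, 2.0, 7.0]) -> (array([5, 1]), array([7., 3.]))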

CommandLine:

python -m ibeis.algo.hots.name_scoring --test-group_scores_by_name

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.name_scoring import *   # NOQA
>>> import ibeis
>>> cm, qreq_ = ibeis.testdata_cm('PZ_MTEST')
>>> ibs = qreq_.ibs
>>> #print(cm.get_inspect_str(qreq_))
>>> aid_list = cm.daid_list
>>> score_list = cm.annot_score_list
>>> nscoretup = group_scores_by_name(ibs, aid_list, score_list)
>>> (sorted_nids, sorted_nscore, sorted_aids, sorted_scores) = nscoretup
>>> ut.assert_eq(sorted_nids[0], cm.qnid)
TODO:

# TODO: this code needs a really good test case
#>>> result = np.array_repr(sorted_nids[0:2])
#>>> print(result)
#array([1, 5])

Ignore::
# hack in dict of Nones prob for testing
import six
qres.aid2_prob = {aid: None for aid in six.iterkeys(qres.aid2_score)}

array([ 1, 5, 26]) [2 6 5]

Timeit::
import ibeis
ibs = ibeis.opendb('PZ_MTEST')
aid_list = ibs.get_valid_aids()
aid_arr = np.array(aid_list)
%timeit ibs.get_annot_name_rowids(aid_list)
%timeit ibs.get_annot_name_rowids(aid_arr)
ibeis.algo.hots.name_scoring.testdata_chipmatch()[source]

ibeis.algo.hots.neighbor_index module

TODO:
Remove Bloat

multi_index.py as well

https://github.com/spotify/annoy

class ibeis.algo.hots.neighbor_index.NeighborIndex(nnindexer, flann_params, cfgstr)[source]

Bases: object

Wrapper class around flann. Stores the flann index and the data it needs to index into.

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index import *  # NOQA
>>> nnindexer, qreq_, ibs = test_nnindexer()
add_ibeis_support(nnindexer, qreq_, new_daid_list, verbose=True)[source]

# TODO: ensure that the memcache changes appropriately

add_support(nnindexer, new_daid_list, new_vecs_list, new_fgws_list, verbose=True)[source]

adds support data (aka data to be indexed)

Parameters:
  • new_daid_list (list) – list of annotation ids that are being added
  • new_vecs_list (list) – list of descriptor vectors for each annotation
  • new_fgws_list (list) – list of weights per vector for each annotation
  • verbose (bool) – verbosity flag(default = True)

CommandLine:

python -m ibeis.algo.hots.neighbor_index --test-add_support

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index import *  # NOQA
>>> nnindexer, qreq_, ibs = test_nnindexer(use_memcache=False)
>>> new_daid_list = [2, 3, 4]
>>> K = 2
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> # get before data
>>> (qfx2_idx1, qfx2_dist1) = nnindexer.knn(qfx2_vec, K)
>>> new_vecs_list, new_fgws_list = get_support_data(qreq_, new_daid_list)
>>> # execute test function
>>> nnindexer.add_support(new_daid_list, new_vecs_list, new_fgws_list)
>>> # test before data vs after data
>>> (qfx2_idx2, qfx2_dist2) = nnindexer.knn(qfx2_vec, K)
>>> assert qfx2_idx2.max() > qfx2_idx1.max()
build_and_save(nnindexer, cachedir, verbose=True, memtrack=None)[source]
debug_nnindexer(nnindexer)[source]

Makes sure the indexer has valid SIFT descriptors

empty_neighbors(nnindexer, nQfx, K)[source]
ensure_indexer(nnindexer, cachedir, verbose=True, force_rebuild=False, memtrack=None)[source]

Ensures that you get a neighbor indexer. It either loads a cached indexer or rebuilds a new one.

ext = '.flann'
get_cfgstr(nnindexer, noquery=False)[source]

Returns a string which uniquely identifies the configuration and support data

Parameters:noquery (bool) – if True cfgstr is only relevant to building the index. No search params are returned (default = False)
Returns:flann_cfgstr
Return type:str

CommandLine:

python -m ibeis.algo.hots.neighbor_index --test-get_cfgstr

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index import *  # NOQA
>>> import ibeis
>>> cfgdict = dict(fg_on=False)
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='testdb1', p='default:fg_on=False')
>>> qreq_.load_indexer()
>>> nnindexer = qreq_.indexer
>>> noquery = True
>>> flann_cfgstr = nnindexer.get_cfgstr(noquery)
>>> result = ('flann_cfgstr = %s' % (str(flann_cfgstr),))
>>> print(result)
flann_cfgstr = _FLANN((algo=kdtree,seed=42,t=8,))_VECS((11260,128)gj5nea@ni0%f3aja)
get_dtype(nnindexer)[source]
get_fname(nnindexer)[source]
get_fpath(nnindexer, cachedir, cfgstr=None)[source]
get_indexed_aids(nnindexer)[source]
get_indexed_vecs(nnindexer)[source]
get_nn_aids(nnindexer, qfx2_nnidx)[source]
Parameters:qfx2_nnidx – (N x K) qfx2_idx[n][k] is the index of the kth approximate nearest data vector
Returns:(N x K) qfx2_fx[n][k] is the annotation id index of the kth approximate nearest data vector
Return type:qfx2_aid

CommandLine:

python -m ibeis.algo.hots.neighbor_index --exec-get_nn_aids

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index import *  # NOQA
>>> import ibeis
>>> cfgdict = dict(fg_on=False)
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='testdb1', p='default:fg_on=False,dim_size=450,resize_dim=area')
>>> qreq_.load_indexer()
>>> nnindexer = qreq_.indexer
>>> qfx2_vec = qreq_.ibs.get_annot_vecs(
>>>     qreq_.get_internal_qaids()[0],
>>>     config2_=qreq_.get_internal_query_config2())
>>> num_neighbors = 4
>>> (qfx2_nnidx, qfx2_dist) = nnindexer.knn(qfx2_vec, num_neighbors)
>>> qfx2_aid = nnindexer.get_nn_aids(qfx2_nnidx)
>>> assert qfx2_aid.shape[1] == num_neighbors
>>> result = ('qfx2_aid.shape = %r' % (qfx2_aid.shape,))
>>> print(result)
qfx2_aid.shape = (1257, 4)
get_nn_axs(nnindexer, qfx2_nnidx)[source]

gets matching internal annotation indices

get_nn_featxs(nnindexer, qfx2_nnidx)[source]
Parameters:qfx2_nnidx – (N x K) qfx2_idx[n][k] is the index of the kth approximate nearest data vector
Returns:(N x K) qfx2_fx[n][k] is the feature index (w.r.t the source annotation) of the kth approximate nearest data vector
Return type:qfx2_fx
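A minimal usage sketch (hedged; it reuses the test_nnindexer helper and the knn call shown in the doctests on this page, and is not itself a doctest from the source):

# map raw knn indices back to per-annotation feature indices
nnindexer, qreq_, ibs = test_nnindexer()
qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
(qfx2_nnidx, qfx2_dist) = nnindexer.knn(qfx2_vec, 3)
qfx2_fx = nnindexer.get_nn_featxs(qfx2_nnidx)   # feature index w.r.t. the source annotation
qfx2_aid = nnindexer.get_nn_aids(qfx2_nnidx)    # owning annotation id for each neighbor
assert qfx2_fx.shape == qfx2_nnidx.shape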
get_nn_fgws(nnindexer, qfx2_nnidx)[source]

Gets foreground weights of neighbors

CommandLine:

python -m ibeis --tf NeighborIndex.get_nn_fgws
Parameters:qfx2_nnidx – (N x K) qfx2_idx[n][k] is the index of the kth approximate nearest data vector
Returns:(N x K) qfx2_fgw[n][k] is the foreground weight of the kth approximate nearest data vector
Return type:qfx2_fgw

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index import *  # NOQA
>>> nnindexer, qreq_, ibs = test_nnindexer(dbname='testdb1')
>>> qfx2_nnidx = np.array([[0, 1, 2], [3, 4, 5]])
>>> qfx2_fgw = nnindexer.get_nn_fgws(qfx2_nnidx)
get_nn_vecs(nnindexer, qfx2_nnidx)[source]

gets matching vectors

get_prefix(nnindexer)[source]
get_removed_idxs(nnindexer)[source]

__removed_ids = nnindexer.flann._FLANN__removed_ids
invalid_idxs = nnindexer.get_removed_idxs()
assert len(np.intersect1d(invalid_idxs, __removed_ids)) == len(__removed_ids)

init_support(nnindexer, aid_list, vecs_list, fgws_list, verbose=True)[source]

prepares inverted indices and FLANN data structure

knn(nnindexer, qfx2_vec, K)[source]

Returns the indices and squared distance to the nearest K neighbors. The distance is normalized between zero and one using VEC_PSEUDO_MAX_DISTANCE = (np.sqrt(2) * VEC_PSEUDO_MAX)

Parameters:
  • qfx2_vec – (N x D) an array of N, D-dimensional query vectors
  • K – number of approximate nearest neighbors to find
Returns: tuple of (qfx2_idx, qfx2_dist)
    ndarray: qfx2_idx[n][k] (N x K) is the index of the kth approximate nearest data vector w.r.t. qfx2_vec[n]
    ndarray: qfx2_dist[n][k] (N x K) is the distance to the kth approximate nearest data vector w.r.t. qfx2_vec[n]; the distance is normalized squared Euclidean distance.
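The normalization can be sketched as follows (hedged: the VEC_PSEUDO_MAX value of 512 is an assumption about the SIFT pseudo-max, not a constant quoted from this module):

import numpy as np
VEC_PSEUDO_MAX = 512                                  # assumed SIFT pseudo-max
VEC_PSEUDO_MAX_DISTANCE = np.sqrt(2) * VEC_PSEUDO_MAX
max_distance_sqrd = VEC_PSEUDO_MAX_DISTANCE ** 2
# a raw squared L2 distance rawdist is reported as
# qfx2_dist = rawdist / max_distance_sqrd, so 0 <= qfx2_dist <= 1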

CommandLine:

python -m ibeis --tf NeighborIndex.knn:0 --debug2
python -m ibeis --tf NeighborIndex.knn:1

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index import *  # NOQA
>>> nnindexer, qreq_, ibs = test_nnindexer()
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> K = 2
>>> nnindexer.debug_nnindexer()
>>> assert vt.check_sift_validity(qfx2_vec), 'bad SIFT properties'
>>> (qfx2_idx, qfx2_dist) = nnindexer.knn(qfx2_vec, K)
>>> result = str(qfx2_idx.shape) + ' ' + str(qfx2_dist.shape)
>>> print('qfx2_vec.dtype = %r' % (qfx2_vec.dtype,))
>>> print('nnindexer.max_distance_sqrd = %r' % (nnindexer.max_distance_sqrd,))
>>> assert np.all(qfx2_dist < 1.0), (
>>>    'distance should be less than 1. got %r' % (qfx2_dist,))
>>> # Ensure distance calculations are correct
>>> qfx2_dvec = nnindexer.idx2_vec[qfx2_idx.T]
>>> targetdist = vt.L2_sift(qfx2_vec, qfx2_dvec).T ** 2
>>> rawdist    = vt.L2_sqrd(qfx2_vec, qfx2_dvec).T
>>> assert np.all(qfx2_dist * nnindexer.max_distance_sqrd == rawdist), (
>>>    'inconsistent distance calculations')
>>> assert np.allclose(targetdist, qfx2_dist), (
>>>    'inconsistent distance calculations')
Example2:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index import *  # NOQA
>>> nnindexer, qreq_, ibs = test_nnindexer()
>>> qfx2_vec = np.empty((0, 128), dtype=nnindexer.get_dtype())
>>> K = 2
>>> (qfx2_idx, qfx2_dist) = nnindexer.knn(qfx2_vec, K)
>>> result = str(qfx2_idx.shape) + ' ' + str(qfx2_dist.shape)
>>> print(result)
(0, 2) (0, 2)
load(nnindexer, cachedir=None, fpath=None, verbose=True)[source]

Loads a cached flann neighbor indexer from disk (not the data)

num_indexed_annots(nnindexer)[source]
num_indexed_vecs(nnindexer)[source]
prefix1 = 'flann'
reindex(nnindexer, verbose=True, memtrack=None)[source]

indexes all vectors with FLANN.

remove_ibeis_support(nnindexer, qreq_, remove_daid_list, verbose=True)[source]

# TODO: ensure that the memcache changes appropriately

remove_support(nnindexer, remove_daid_list, verbose=True)[source]

CommandLine:

python -m ibeis.algo.hots.neighbor_index --test-remove_support
SeeAlso:
~/code/flann/src/python/pyflann/index.py

Example

>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> from ibeis.algo.hots.neighbor_index import *  # NOQA
>>> nnindexer, qreq_, ibs = test_nnindexer(use_memcache=False)
>>> remove_daid_list = [8, 9, 10, 11]
>>> K = 2
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> # get before data
>>> (qfx2_idx1, qfx2_dist1) = nnindexer.knn(qfx2_vec, K)
>>> # execute test function
>>> nnindexer.remove_support(remove_daid_list)
>>> # test before data vs after data
>>> (qfx2_idx2, qfx2_dist2) = nnindexer.knn(qfx2_vec, K)
>>> ax2_nvecs = ut.dict_take(ut.dict_hist(nnindexer.idx2_ax), range(len(nnindexer.ax2_aid)))
>>> assert qfx2_idx2.max() < ax2_nvecs[0], 'should only get points from aid 7'
>>> assert qfx2_idx1.max() > ax2_nvecs[0], 'should get points from everyone'
rrr(verbose=True)

special class reloading function

save(nnindexer, cachedir=None, fpath=None, verbose=True)[source]

Caches a flann neighbor indexer to disk (not the data)

class ibeis.algo.hots.neighbor_index.NeighborIndex2(nnindexer, flann_params=None, cfgstr=None)[source]

Bases: ibeis.algo.hots.neighbor_index.NeighborIndex, utool.util_dev.NiceRepr

conditional_knn(nnindexer, qfx2_vec, num_neighbors, invalid_axs)[source]
>>> from ibeis.algo.hots.neighbor_index import *  # NOQA
>>> import ibeis
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='seaturtles')
>>> qreq_.load_indexer()
>>> qfx2_vec = qreq_.ibs.get_annot_vecs(qreq_.qaids[0])
>>> num_neighbors = 2
>>> nnindexer = qreq_.indexer
>>> ibs = qreq_.ibs
>>> qaid = 1
>>> qencid = ibs.get_annot_encounter_text([qaid])[0]
>>> ax2_encid = np.array(ibs.get_annot_encounter_text(nnindexer.ax2_aid))
>>> invalid_axs = np.where(ax2_encid == qencid)[0]
>>> (qfx2_idx, qfx2_dist) = nnindexer.conditional_knn(qfx2_vec, num_neighbors, invalid_axs)
static get_support(depc, aid_list, config)[source]
on_load(nnindexer, depc)[source]
on_save(nnindexer, depc, fpath)[source]
rrr(verbose=True)

special class reloading function

ibeis.algo.hots.neighbor_index.get_support_data(qreq_, daid_list)[source]
ibeis.algo.hots.neighbor_index.invert_index(vecs_list, ax_list, verbose=True)[source]

Aggregates descriptors of input annotations and returns inverted information

Parameters:
  • vecs_list (list) –
  • ax_list (list) –
  • verbose (bool) – verbosity flag(default = True)
Returns:

(idx2_vec, idx2_ax, idx2_fx)

Return type:

tuple

CommandLine:

python -m ibeis.algo.hots.neighbor_index --test-invert_index

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.neighbor_index import *  # NOQA
>>> import vtool as vt
>>> num = 100
>>> rng = np.random.RandomState(0)
>>> ax_list = np.arange(num)
>>> vecs_list = [vt.tests.dummy.get_dummy_dpts(rng.randint(100)) for ax in ax_list]
>>> verbose = True
>>> (idx2_vec, idx2_ax, idx2_fx) = invert_index(vecs_list, ax_list, verbose)
ibeis.algo.hots.neighbor_index.prepare_index_data(aid_list, vecs_list, fgws_list, verbose=True)[source]

flattens vecs_list and builds a reverse index from the flattened indices (idx) to the original aids and fxs
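A hedged sketch of the flatten-and-reverse-index idea (toy arrays, not the actual implementation):

import numpy as np
vecs_list = [np.zeros((3, 128), dtype=np.uint8), np.zeros((2, 128), dtype=np.uint8)]  # hypothetical descriptors
idx2_vec = np.vstack(vecs_list)
idx2_ax = np.array([ax for ax, vecs in enumerate(vecs_list) for _ in range(len(vecs))])
idx2_fx = np.array([fx for vecs in vecs_list for fx in range(len(vecs))])
assert idx2_vec.shape[0] == idx2_ax.shape[0] == idx2_fx.shape[0]
# idx2_ax == [0, 0, 0, 1, 1] and idx2_fx == [0, 1, 2, 0, 1]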

ibeis.algo.hots.neighbor_index.test_nnindexer(*args, **kwargs)[source]

ibeis.algo.hots.neighbor_index_cache module

class ibeis.algo.hots.neighbor_index_cache.UUIDMapHyrbridCache[source]

Bases: object

Class that lets multiple ways of writing to the uuid_map be swapped in and out interchangeably

TODO: the global read / write should periodically sync itself to disk and it should be loaded from disk initially
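A rough sketch of the hybrid in-memory/on-disk idea (the class and method names here are hypothetical, for illustration only, not the real class):

import pickle
class ToyUUIDMapCache(object):
    """Toy stand-in: reads and writes hit an in-memory dict; dump() syncs to disk."""
    def __init__(self):
        self.uuid_map = {}
    def write(self, daids_hashid, visual_uuid_list):
        self.uuid_map[daids_hashid] = visual_uuid_list
    def read(self, daids_hashid):
        return self.uuid_map.get(daids_hashid, [])
    def dump(self, fpath):
        with open(fpath, 'wb') as file_:
            pickle.dump(self.uuid_map, file_)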

dump(cachedir)[source]
init(*args, **kwargs)[source]
load(cachedir)[source]

Returns a cache UUIDMap

read_uuid_map_dict(uuid_map_fpath, min_reindex_thresh)[source]

uses in memory dictionary instead of disk

write_uuid_map_dict(uuid_map_fpath, visual_uuid_list, daids_hashid)[source]

uses in memory dictionary instead of disk

Lets the multi-indexer know about any big caches we’ve made. Also lets nnindexer know about other prebuilt indexers so it can attempt to just add points to them so as to avoid a rebuild.

ibeis.algo.hots.neighbor_index_cache.background_flann_func(cachedir, daid_list, vecs_list, fgws_list, flann_params, cfgstr, uuid_map_fpath, daids_hashid, visual_uuid_list, min_reindex_thresh)[source]

FIXME: Duplicate code

ibeis.algo.hots.neighbor_index_cache.build_nnindex_cfgstr(qreq_, daid_list)[source]

builds a string that uniquely identifies an indexer built with the parameters from the input query request and the indexed descriptors from the input annotation ids

Parameters:
  • qreq (QueryRequest) – query request object with hyper-parameters
  • daid_list (list) –
Returns:

nnindex_cfgstr

Return type:

str

CommandLine:

python -m ibeis.algo.hots.neighbor_index_cache --test-build_nnindex_cfgstr

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index_cache import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(db='testdb1')
>>> daid_list = ibs.get_valid_aids(species=ibeis.const.TEST_SPECIES.ZEB_PLAIN)
>>> qreq_ = ibs.new_query_request(daid_list, daid_list, cfgdict=dict(fg_on=False))
>>> nnindex_cfgstr = build_nnindex_cfgstr(qreq_, daid_list)
>>> result = str(nnindex_cfgstr)
>>> print(result)

_VUUIDS((6)ylydksaqdigdecdd)_FLANN(8_kdtrees)_FeatureWeight(detector=cnn,sz256,thresh=20,ksz=20,enabled=False)_FeatureWeight(detector=cnn,sz256,thresh=20,ksz=20,enabled=False)

_VUUIDS((6)ylydksaqdigdecdd)_FLANN(8_kdtrees)_FEATWEIGHT(OFF)_FEAT(hesaff+sift_)_CHIP(sz450)

ibeis.algo.hots.neighbor_index_cache.can_request_background_nnindexer()[source]
ibeis.algo.hots.neighbor_index_cache.check_background_process()[source]

checks to see if the process has finished and then writes the uuid map to disk

ibeis.algo.hots.neighbor_index_cache.clear_memcache()[source]
ibeis.algo.hots.neighbor_index_cache.clear_uuid_cache(qreq_)[source]

CommandLine:

python -m ibeis.algo.hots.neighbor_index_cache --test-clear_uuid_cache

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index_cache import *  # NOQA
>>> import ibeis
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='testdb1', p='default:fg_on=True')
>>> fgws_list = clear_uuid_cache(qreq_)
>>> result = str(fgws_list)
>>> print(result)
ibeis.algo.hots.neighbor_index_cache.get_data_cfgstr(ibs, daid_list)[source]

part 2 data hash id

ibeis.algo.hots.neighbor_index_cache.get_nnindexer_uuid_map_fpath(qreq_)[source]

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index_cache import *  # NOQA
>>> import ibeis
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='testdb1')
>>> uuid_map_fpath = get_nnindexer_uuid_map_fpath(qreq_)
>>> result = str(ut.path_ndir_split(uuid_map_fpath, 3))
>>> print(result)
.../_ibeis_cache/flann/uuid_map_FLANN(8_kdtrees)_Feat(hesaff+sift)_Chip(sz700,width).cPkl

.../_ibeis_cache/flann/uuid_map_FLANN(8_kdtrees)_FEAT(hesaff+sift_)_CHIP(sz450).cPkl

ibeis.algo.hots.neighbor_index_cache.group_daids_by_cached_nnindexer(qreq_, daid_list, min_reindex_thresh, max_covers=None)[source]

CommandLine:

python -m ibeis.algo.hots.neighbor_index_cache --test-group_daids_by_cached_nnindexer

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index_cache import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb('testdb1')
>>> ZEB_PLAIN = ibeis.const.TEST_SPECIES.ZEB_PLAIN
>>> daid_list = ibs.get_valid_aids(species=ZEB_PLAIN)
>>> qreq_ = ibs.new_query_request(daid_list, daid_list)
>>> # Set the params a bit lower
>>> max_covers = None
>>> qreq_.qparams.min_reindex_thresh = 1
>>> min_reindex_thresh = qreq_.qparams.min_reindex_thresh
>>> # STEP 0: CLEAR THE CACHE
>>> clear_uuid_cache(qreq_)
>>> # STEP 1: ASSERT EMPTY INDEX
>>> daid_list = ibs.get_valid_aids(species=ZEB_PLAIN)[0:3]
>>> uncovered_aids, covered_aids_list = group_daids_by_cached_nnindexer(
...     qreq_, daid_list, min_reindex_thresh, max_covers)
>>> result1 = uncovered_aids, covered_aids_list
>>> ut.assert_eq(result1, ([1, 2, 3], []), 'pre request')
>>> # TEST 2: SHOULD MAKE 123 COVERED
>>> nnindexer = request_memcached_ibeis_nnindexer(qreq_, daid_list)
>>> uncovered_aids, covered_aids_list = group_daids_by_cached_nnindexer(
...     qreq_, daid_list, min_reindex_thresh, max_covers)
>>> result2 = uncovered_aids, covered_aids_list
>>> ut.assert_eq(result2, ([], [[1, 2, 3]]), 'post request')
ibeis.algo.hots.neighbor_index_cache.new_neighbor_index(daid_list, vecs_list, fgws_list, flann_params, cachedir, cfgstr, force_rebuild=False, verbose=True, memtrack=None)[source]

constructs neighbor index independent of ibeis

Parameters:
  • daid_list (list) –
  • vecs_list (list) –
  • fgws_list (list) –
  • flann_params (dict) –
  • flann_cachedir (None) –
  • nnindex_cfgstr (str) –
  • use_memcache (bool) –
Returns:

nnindexer

CommandLine:

python -m ibeis.algo.hots.neighbor_index_cache --test-new_neighbor_index

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.neighbor_index_cache import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb('testdb1')
>>> daid_list = ibs.get_valid_aids(species=ibeis.const.TEST_SPECIES.ZEB_PLAIN)
>>> qreq_ = ibs.new_query_request(daid_list, daid_list)
>>> nnindex_cfgstr = build_nnindex_cfgstr(qreq_, daid_list)
>>> verbose = True
>>> nnindex_cfgstr = build_nnindex_cfgstr(qreq_, daid_list)
>>> cfgstr = nnindex_cfgstr
>>> cachedir     = qreq_.ibs.get_flann_cachedir()
>>> flann_params = qreq_.qparams.flann_params
>>> # Get annot descriptors to index
>>> vecs_list, fgws_list = get_support_data(qreq_, daid_list)
>>> nnindexer = new_neighbor_index(daid_list, vecs_list, fgws_list, flann_params, cachedir, cfgstr, verbose=True)
>>> result = ('nnindexer.ax2_aid = %s' % (str(nnindexer.ax2_aid),))
>>> print(result)
nnindexer.ax2_aid = [1 2 3 4 5 6]
ibeis.algo.hots.neighbor_index_cache.print_uuid_cache(qreq_)[source]

CommandLine:

python -m ibeis.algo.hots.neighbor_index_cache --test-print_uuid_cache

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index_cache import *  # NOQA
>>> import ibeis
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='PZ_Master0', p='default:fg_on=False')
>>> print_uuid_cache(qreq_)
>>> result = str(nnindexer)
>>> print(result)
ibeis.algo.hots.neighbor_index_cache.request_augmented_ibeis_nnindexer(qreq_, daid_list, verbose=True, use_memcache=True, force_rebuild=False, memtrack=None)[source]

DO NOT USE. THIS FUNCTION CAN CURRENTLY CAUSE A SEGFAULT

Tries to give you an indexer for the requested daids using the least amount of computation possible, by loading and adding to a partially built nnindex if possible; if that fails, it falls back to request_memcache.

Parameters:
  • qreq (QueryRequest) – query request object with hyper-parameters
  • daid_list (list) –
Returns:

nnindex_cfgstr

Return type:

str

CommandLine:

python -m ibeis.algo.hots.neighbor_index_cache --test-request_augmented_ibeis_nnindexer

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index_cache import *  # NOQA
>>> import ibeis
>>> # build test data
>>> ZEB_PLAIN = ibeis.const.TEST_SPECIES.ZEB_PLAIN
>>> ibs = ibeis.opendb('testdb1')
>>> use_memcache, max_covers, verbose = True, None, True
>>> daid_list = ibs.get_valid_aids(species=ZEB_PLAIN)[0:6]
>>> qreq_ = ibs.new_query_request(daid_list, daid_list)
>>> qreq_.qparams.min_reindex_thresh = 1
>>> min_reindex_thresh = qreq_.qparams.min_reindex_thresh
>>> # CLEAR CACHE for clean test
>>> clear_uuid_cache(qreq_)
>>> # LOAD 3 AIDS INTO CACHE
>>> aid_list = ibs.get_valid_aids(species=ZEB_PLAIN)[0:3]
>>> # Should fallback
>>> nnindexer = request_augmented_ibeis_nnindexer(qreq_, aid_list)
>>> # assert the fallback
>>> uncovered_aids, covered_aids_list = group_daids_by_cached_nnindexer(
...     qreq_, daid_list, min_reindex_thresh, max_covers)
>>> result2 = uncovered_aids, covered_aids_list
>>> ut.assert_eq(result2, ([4, 5, 6], [[1, 2, 3]]), 'pre augment')
>>> # Should augment
>>> nnindexer = request_augmented_ibeis_nnindexer(qreq_, daid_list)
>>> uncovered_aids, covered_aids_list = group_daids_by_cached_nnindexer(
...     qreq_, daid_list, min_reindex_thresh, max_covers)
>>> result3 = uncovered_aids, covered_aids_list
>>> ut.assert_eq(result3, ([], [[1, 2, 3, 4, 5, 6]]), 'post augment')
>>> # Should fallback
>>> nnindexer2 = request_augmented_ibeis_nnindexer(qreq_, daid_list)
>>> assert nnindexer is nnindexer2
ibeis.algo.hots.neighbor_index_cache.request_background_nnindexer(qreq_, daid_list)[source]

FIXME: Duplicate code

Parameters:
  • qreq (QueryRequest) – query request object with hyper-parameters
  • daid_list (list) –

CommandLine:

python -m ibeis.algo.hots.neighbor_index_cache --test-request_background_nnindexer

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index_cache import *  # NOQA
>>> import ibeis
>>> # build test data
>>> ibs = ibeis.opendb('testdb1')
>>> daid_list = ibs.get_valid_aids(species=ibeis.const.TEST_SPECIES.ZEB_PLAIN)
>>> qreq_ = ibs.new_query_request(daid_list, daid_list)
>>> # execute function
>>> request_background_nnindexer(qreq_, daid_list)
>>> # verify results
>>> result = str(False)
>>> print(result)
ibeis.algo.hots.neighbor_index_cache.request_diskcached_ibeis_nnindexer(qreq_, daid_list, nnindex_cfgstr=None, verbose=True, force_rebuild=False, memtrack=None)[source]

builds a new NeighborIndexer which will try to use a disk-cached flann if available

Parameters:
  • qreq (QueryRequest) – query request object with hyper-parameters
  • daid_list (list) –
  • nnindex_cfgstr
  • verbose (bool) –
Returns:

nnindexer

Return type:

NeighborIndexer

CommandLine:

python -m ibeis.algo.hots.neighbor_index_cache --test-request_diskcached_ibeis_nnindexer

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index_cache import *  # NOQA
>>> import ibeis
>>> # build test data
>>> ibs = ibeis.opendb('testdb1')
>>> daid_list = ibs.get_valid_aids(species=ibeis.const.TEST_SPECIES.ZEB_PLAIN)
>>> qreq_ = ibs.new_query_request(daid_list, daid_list)
>>> nnindex_cfgstr = build_nnindex_cfgstr(qreq_, daid_list)
>>> verbose = True
>>> # execute function
>>> nnindexer = request_diskcached_ibeis_nnindexer(qreq_, daid_list, nnindex_cfgstr, verbose)
>>> # verify results
>>> result = str(nnindexer)
>>> print(result)
ibeis.algo.hots.neighbor_index_cache.request_ibeis_nnindexer(qreq_, verbose=True, use_memcache=True, force_rebuild=False)[source]

CALLED BY QUERYREQUEST::LOAD_INDEXER. IBEIS interface into neighbor_index_cache

Parameters:qreq (QueryRequest) – hyper-parameters
Returns:nnindexer
Return type:NeighborIndexer

CommandLine:

python -m ibeis.algo.hots.neighbor_index_cache --test-request_ibeis_nnindexer

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index_cache import *  # NOQA
>>> nnindexer, qreq_, ibs = test_nnindexer(None)
>>> nnindexer = request_ibeis_nnindexer(qreq_)
ibeis.algo.hots.neighbor_index_cache.request_memcached_ibeis_nnindexer(qreq_, daid_list, use_memcache=True, verbose=True, veryverbose=False, force_rebuild=False, allow_memfallback=True, memtrack=None)[source]

FOR INTERNAL USE ONLY. Takes a custom daid list, which might not be the same as what is in qreq_.

CommandLine:

python -m ibeis.algo.hots.neighbor_index_cache --test-request_memcached_ibeis_nnindexer

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index_cache import *  # NOQA
>>> import ibeis
>>> # build test data
>>> ibs = ibeis.opendb('testdb1')
>>> ZEB_PLAIN = ibeis.const.TEST_SPECIES.ZEB_PLAIN
>>> daid_list = ibs.get_valid_aids(species=ZEB_PLAIN)[0:3]
>>> qreq_ = ibs.new_query_request(daid_list, daid_list)
>>> qreq_.qparams.min_reindex_thresh = 3
>>> verbose = True
>>> use_memcache = True
>>> # execute function
>>> nnindexer = request_memcached_ibeis_nnindexer(qreq_, daid_list, use_memcache)
>>> # verify results
>>> result = str(nnindexer)
>>> print(result)
ibeis.algo.hots.neighbor_index_cache.test_nnindexer(dbname='testdb1', with_indexer=True, use_memcache=True)[source]

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.neighbor_index_cache import *  # NOQA
>>> nnindexer, qreq_, ibs = test_nnindexer()

ibeis.algo.hots.nn_weights module

ibeis.algo.hots.nn_weights.apply_normweight(normweight_fn, qfx2_normk, qfx2_idx, qfx2_dist, Knorm)[source]

helper applies the normalized weight function to one query annotation

Parameters:
  • normweight_fn (func) – chosen weight function e.g. lnbnn
  • qaid (int) – query annotation id
  • qfx2_idx (ndarray[int32_t, ndims=2]) – mapping from query feature index to db neighbor index
  • qfx2_dist (ndarray) – mapping from query feature index to dist
  • Knorm (int) –
  • qreq (QueryRequest) – query request object with hyper-parameters
Returns:

qfx2_normweight

Return type:

ndarray

CommandLine:

python -m ibeis.algo.hots.nn_weights --test-apply_normweight

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> from ibeis.algo.hots import nn_weights
>>> cfgdict = {'K':10, 'Knorm': 10, 'normalizer_rule': 'name', 'dim_size': 450, 'resize_dim': 'area'}
>>> tup = plh.testdata_pre_weight_neighbors(cfgdict=cfgdict)
>>> ibs, qreq_, nns_list, nnvalid0_list = tup
>>> qaid = qreq_.get_external_qaids()[0]
>>> Knorm = qreq_.qparams.Knorm
>>> normweight_fn = lnbnn_fn
>>> normalizer_rule  = qreq_.qparams.normalizer_rule
>>> (qfx2_idx, qfx2_dist) = nns_list[0]
>>> qfx2_normk = get_normk(qreq_, qaid, qfx2_idx, Knorm, normalizer_rule)
>>> qfx2_normweight = nn_weights.apply_normweight(
>>>   normweight_fn, qfx2_normk, qfx2_idx, qfx2_dist, Knorm)
>>> ut.assert_inbounds(qfx2_normweight.sum(), 600, 950)
ibeis.algo.hots.nn_weights.bar_l2_fn(vdist, ndist)[source]

The feature weight is (1 - the Euclidean distance between the features). The normalizers are unused.

(not really a normalized function)

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> out = bar_l2_fn(vdist, ndist)
>>> result = ut.hz_str('barl2  = ', ut.repr2(out, precision=2))
>>> print(result)
barl2  = np.array([[ 1.  ,  0.6 ,  0.41],
                   [ 0.83,  0.7 ,  0.49],
                   [ 0.87,  0.58,  0.27],
                   [ 0.88,  0.63,  0.46],
                   [ 0.82,  0.53,  0.5 ]])
ibeis.algo.hots.nn_weights.borda_match_weighter(nns_list, nnvalid0_list, qreq_)[source]

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> tup = plh.testdata_pre_weight_neighbors('PZ_MTEST')
>>> ibs, qreq_, nns_list, nnvalid0_list = tup
>>> bordavote_weight_list = borda_match_weighter(nns_list, nnvalid0_list, qreq_)
>>> result = ('bordavote_weight_list = %s' % (str(bordavote_weight_list),))
>>> print(result)
ibeis.algo.hots.nn_weights.const_match_weighter(nns_list, nnvalid0_list, qreq_)[source]

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> tup = plh.testdata_pre_weight_neighbors('PZ_MTEST')
>>> ibs, qreq_, nns_list, nnvalid0_list = tup
>>> constvote_weight_list = borda_match_weighter(nns_list, nnvalid0_list, qreq_)
>>> result = ('constvote_weight_list = %s' % (str(constvote_weight_list),))
>>> print(result)
ibeis.algo.hots.nn_weights.cos_match_weighter(nns_list, nnvalid0_list, qreq_)[source]

Uses an SMK-like selectivity function. Need to gridsearch for a good alpha.
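A hedged sketch of an SMK-style selectivity function (alpha and thresh here are illustrative free parameters; the actual values used by cos_match_weighter are not documented on this page):

import numpy as np
def selectivity(cos_sim, alpha=3.0, thresh=0.0):
    # cos_sim is an ndarray of cosine similarities; emphasize strong
    # similarities and suppress weak ones
    out = np.sign(cos_sim) * np.power(np.abs(cos_sim), alpha)
    out[cos_sim < thresh] = 0
    return out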

CommandLine:

python -m ibeis.algo.hots.nn_weights --test-cos_match_weighter

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> from ibeis.algo.hots import nn_weights
>>> tup = plh.testdata_pre_weight_neighbors('PZ_MTEST', cfgdict=dict(cos_on=True, K=5, Knorm=5))
>>> ibs, qreq_, nns_list, nnvalid0_list = tup
>>> assert qreq_.qparams.cos_on, 'bug setting custom params cos_weight'
>>> cos_weight_list = nn_weights.cos_match_weighter(nns_list, nnvalid0_list, qreq_)
ibeis.algo.hots.nn_weights.distinctiveness_match_weighter(qreq_)[source]

TODO: finish integration

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> from ibeis.algo.hots import nn_weights
>>> tup = plh.testdata_pre_weight_neighbors('PZ_MTEST', codename='vsone_dist_extern_distinctiveness')
>>> ibs, qreq_, nns_list, nnvalid0_list = tup
ibeis.algo.hots.nn_weights.fg_match_weighter(nns_list, nnvalid0_list, qreq_)[source]

foreground feature match weighting

CommandLine:

python -m ibeis.algo.hots.nn_weights --exec-fg_match_weighter

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> tup = plh.testdata_pre_weight_neighbors('PZ_MTEST')
>>> ibs, qreq_, nns_list, nnvalid0_list = tup
>>> print(ut.dict_str(qreq_.qparams.__dict__, sorted_=True))
>>> assert qreq_.qparams.fg_on == True, 'bug setting custom params fg_on'
>>> fgvotes_list = fg_match_weighter(nns_list, nnvalid0_list, qreq_)
>>> print('fgvotes_list = %r' % (fgvotes_list,))
ibeis.algo.hots.nn_weights.get_name_normalizers(qaid, qreq_, Knorm, qfx2_idx)[source]

helper normalizers for ‘name’ normalizer_rule

Parameters:
  • qaid (int) – query annotation id
  • qreq (QueryRequest) – hyper-parameters
  • Knorm (int) –
  • qfx2_idx (ndarray) –
Returns:

qfx2_normk

Return type:

ndarray

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> from ibeis.algo.hots import nn_weights
>>> cfgdict = {'K':10, 'Knorm': 10, 'normalizer_rule': 'name'}
>>> tup = plh.testdata_pre_weight_neighbors(cfgdict=cfgdict)
>>> ibs, qreq_, nns_list, nnvalid0_list = tup
>>> Knorm = qreq_.qparams.Knorm
>>> (qfx2_idx, qfx2_dist) = nns_list[0]
>>> qaid = qreq_.get_external_qaids()[0]
>>> qfx2_normk = get_name_normalizers(qaid, qreq_, Knorm, qfx2_idx)
ibeis.algo.hots.nn_weights.get_normk(qreq_, qaid, qfx2_idx, Knorm, normalizer_rule)[source]

Get positions of the LNBNN/ratio tests normalizers

ibeis.algo.hots.nn_weights.gravity_match_weighter(nns_list, nnvalid0_list, qreq_)[source]
ibeis.algo.hots.nn_weights.lnbnn_fn(vdist, ndist)[source]

Local Naive Bayes Nearest Neighbor weighting

References

http://www.cs.ubc.ca/~lowe/papers/12mccannCVPR.pdf http://www.cs.ubc.ca/~sanchom/local-naive-bayes-nearest-neighbor
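As a hedged reading of the doctest below (inferred from its output, not quoted from the source), the weight reduces to the normalizer distance minus the voting distance:

import numpy as np
vdist = np.array([[0.0, 0.4, 0.59]])
ndist = np.array([[0.62]])
weight = ndist - vdist   # -> [[0.62, 0.22, 0.03]], matching the first row of the example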

Sympy:
>>> import sympy
>>> #https://github.com/sympy/sympy/pull/10247
>>> from sympy import log
>>> from sympy.stats import P, E, variance, Die, Normal, FiniteRV
>>> C, Cbar = sympy.symbols('C Cbar')
>>> d_i = Die(sympy.symbols('di'), 6)
>>> log(P(d_i, C) / P(d_i, Cbar))
>>> #
>>> PdiC, PdiCbar = sympy.symbols('PdiC, PdiCbar')
>>> oddsC = log(PdiC / PdiCbar)
>>> sympy.simplify(oddsC)
>>> import vtool as vt
>>> vt.check_expr_eq(oddsC, log(PdiC) - log(PdiCbar))

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> out = lnbnn_fn(vdist, ndist)
>>> result = ut.hz_str('lnbnn  = ', ut.repr2(out, precision=2))
>>> print(result)
lnbnn  = np.array([[ 0.62,  0.22,  0.03],
                   [ 0.35,  0.22,  0.01],
                   [ 0.87,  0.58,  0.27],
                   [ 0.67,  0.42,  0.25],
                   [ 0.59,  0.3 ,  0.27]])
ibeis.algo.hots.nn_weights.loglnbnn_fn(vdist, ndist)[source]

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> out = loglnbnn_fn(vdist, ndist)
>>> result = ut.hz_str('loglnbnn  = ', ut.repr2(out, precision=2))
>>> print(result)
loglnbnn  = np.array([[ 0.48,  0.2 ,  0.03],
                      [ 0.3 ,  0.2 ,  0.01],
                      [ 0.63,  0.46,  0.24],
                      [ 0.51,  0.35,  0.22],
                      [ 0.46,  0.26,  0.24]])
ibeis.algo.hots.nn_weights.logratio_fn(vdist, ndist)[source]

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> out = normonly_fn(vdist, ndist)
>>> result = ut.repr2(out)
>>> print(result)
np.array([[ 0.62,  0.62,  0.62],
          [ 0.52,  0.52,  0.52],
          [ 1.  ,  1.  ,  1.  ],
          [ 0.79,  0.79,  0.79],
          [ 0.77,  0.77,  0.77]])
ibeis.algo.hots.nn_weights.mark_name_valid_normalizers(qnid, qfx2_topnid, qfx2_normnid)[source]

Helper func that allows matches only to the first result for a name

Each query feature finds its K matches and Kn normalizing matches. These are the candidates from which it can choose a set of matches and a single normalizer.

A normalizer is marked as invalid if it belongs to a name that was also in its feature’s candidate matching set.

Parameters:
  • qfx2_topnid (ndarray) – marks the names a feature matches
  • qfx2_normnid (ndarray) – marks the names of the feature normalizers
  • qnid (int) – query name id
Returns:

qfx2_selnorm - index of the selected normalizer for each query feature

CommandLine:

python -m ibeis.algo.hots.nn_weights --exec-mark_name_valid_normalizers

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> qnid = 1
>>> qfx2_topnid = np.array([[1, 1, 1, 1, 1],
...                         [1, 2, 1, 1, 1],
...                         [1, 2, 2, 3, 1],
...                         [5, 8, 9, 8, 8],
...                         [5, 8, 9, 8, 8],
...                         [6, 6, 9, 6, 8],
...                         [5, 8, 6, 6, 6],
...                         [1, 2, 8, 6, 6]], dtype=np.int32)
>>> qfx2_normnid = np.array([[ 1, 1, 1],
...                          [ 2, 3, 1],
...                          [ 2, 3, 1],
...                          [ 6, 6, 6],
...                          [ 6, 6, 8],
...                          [ 2, 6, 6],
...                          [ 6, 6, 1],
...                          [ 4, 4, 9]], dtype=np.int32)
>>> qfx2_selnorm = mark_name_valid_normalizers(qnid, qfx2_topnid, qfx2_normnid)
>>> K = len(qfx2_topnid.T)
>>> Knorm = len(qfx2_normnid.T)
>>> qfx2_normk_ = qfx2_selnorm + (Knorm)  # convert from negative to positive indexes
>>> result = str(qfx2_normk_)
>>> print(result)
[2 1 2 0 0 0 2 0]
ibeis.algo.hots.nn_weights.nn_normalized_weight(normweight_fn, nns_list, nnvalid0_list, qreq_)[source]

Generic function to weight nearest neighbors

ratio, lnbnn, and other nearest neighbor based functions use this

Parameters:
  • normweight_fn (func) – chosen weight function e.g. lnbnn
  • nns_list (dict) – query descriptor nearest neighbors and distances. (qfx2_nnx, qfx2_dist)
  • nnvalid0_list (list) – list of neighbors preflagged as valid
  • qreq (QueryRequest) – hyper-parameters
Returns:

weights_list

Return type:

list

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> from ibeis.algo.hots import nn_weights
>>> tup = plh.testdata_pre_weight_neighbors('PZ_MTEST')
>>> ibs, qreq_, nns_list, nnvalid0_list = tup
>>> normweight_fn = lnbnn_fn
>>> weights_list1, normk_list1 = nn_weights.nn_normalized_weight(normweight_fn, nns_list, nnvalid0_list, qreq_)
>>> weights1 = weights_list1[0]
>>> nn_normonly_weight = nn_weights.NN_WEIGHT_FUNC_DICT['lnbnn']
>>> weights_list2, normk_list2 = nn_normonly_weight(nns_list, nnvalid0_list, qreq_)
>>> weights2 = weights_list2[0]
>>> assert np.all(weights1 == weights2)
>>> ut.assert_inbounds(weights1.sum(), 100, 310)

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> from ibeis.algo.hots import nn_weights
>>> tup = plh.testdata_pre_weight_neighbors('PZ_MTEST')
>>> ibs, qreq_, nns_list, nnvalid0_list = tup
>>> normweight_fn = ratio_fn
>>> weights_list1, normk_list1 = nn_weights.nn_normalized_weight(normweight_fn, nns_list, nnvalid0_list, qreq_)
>>> weights1 = weights_list1[0]
>>> nn_normonly_weight = nn_weights.NN_WEIGHT_FUNC_DICT['ratio']
>>> weights_list2, normk_list2 = nn_normonly_weight(nns_list, nnvalid0_list, qreq_)
>>> weights2 = weights_list2[0]
>>> assert np.all(weights1 == weights2)
>>> ut.assert_inbounds(weights1.sum(), 1500, 4500)
ibeis.algo.hots.nn_weights.normonly_fn(vdist, ndist)[source]

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> out = normonly_fn(vdist, ndist)
>>> result = ut.repr2(out)
>>> print(result)
np.array([[ 0.62,  0.62,  0.62],
          [ 0.52,  0.52,  0.52],
          [ 1.  ,  1.  ,  1.  ],
          [ 0.79,  0.79,  0.79],
          [ 0.77,  0.77,  0.77]])
ibeis.algo.hots.nn_weights.ratio_fn(vdist, ndist)[source]
Parameters:
  • vdist (ndarray) – voting array
  • ndist (ndarray) – normalizing array
Returns:

out

Return type:

ndarray
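The example below is consistent with a plain ratio of voting distance to normalizing distance (a hedged reading inferred from the output, not quoted from the source):

import numpy as np
vdist = np.array([[0.0, 0.4, 0.59]])
ndist = np.array([[0.62]])
out = vdist / ndist   # -> [[0.0, 0.65, 0.95]] after rounding, matching the first row of Example1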

Example1:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> out = ratio_fn(vdist, ndist)
>>> result = ut.hz_str('ratio = ', ut.repr2(out, precision=2))
>>> print(result)
ratio = np.array([[ 0.  ,  0.65,  0.95],
                  [ 0.33,  0.58,  0.98],
                  [ 0.13,  0.42,  0.73],
                  [ 0.15,  0.47,  0.68],
                  [ 0.23,  0.61,  0.65]])
ibeis.algo.hots.nn_weights.test_all_normalized_weights()[source]

CommandLine:

python -m ibeis.algo.hots.nn_weights --exec-test_all_normalized_weights

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> test_all_normalized_weights()
ibeis.algo.hots.nn_weights.testdata_vn_dists(nfeats=5, K=3)[source]

Test voting and normalizing distances

Returns:(vdist, ndist) - test voting distances and normalizer distances
Return type:tuple

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.nn_weights import *  # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> result = (ut.hz_str('vdist = ', ut.repr2(vdist))) + '\n'
>>> result += (ut.hz_str('ndist = ', ut.repr2(ndist)))
>>> print(result)
vdist = np.array([[ 0.  ,  0.4 ,  0.59],
                  [ 0.17,  0.3 ,  0.51],
                  [ 0.13,  0.42,  0.73],
                  [ 0.12,  0.37,  0.54],
                  [ 0.18,  0.47,  0.5 ]])
ndist = np.array([[ 0.62],
                  [ 0.52],
                  [ 1.  ],
                  [ 0.79],
                  [ 0.77]])

ibeis.algo.hots.old_chip_match module

class ibeis.algo.hots.old_chip_match.AlignedListDictProxy(key2_idx, key_list, val_list)[source]

Bases: utool.util_dev.DictLike_old

Simulates a dict when using parallel lists. The point of this class is that when there are many instances of this class, key2_idx can be shared between them. Ideally this class won’t be used and will disappear when the parallel lists are being used properly.

DEPRECATE: AlignedListDictProxy’s defaultdict behavior is weird
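A hedged sketch of the parallel-list idea (toy names, not the real class): many proxies can share a single key2_idx mapping so the keys are stored only once.

key_list = ['a', 'b', 'c']
key2_idx = {key: idx for idx, key in enumerate(key_list)}  # shared across proxies
val_list1 = [1, 2, 3]
val_list2 = [10, 20, 30]
def proxy_getitem(key2_idx, val_list, key):
    return val_list[key2_idx[key]]
assert proxy_getitem(key2_idx, val_list1, 'b') == 2
assert proxy_getitem(key2_idx, val_list2, 'b') == 20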

iteritems()[source]
iterkeys()[source]
itervalues()[source]
pop(key)[source]

ibeis.algo.hots.pgm_ext module

class ibeis.algo.hots.pgm_ext.ApproximateFactor(state_idxs, weights, variables, statename_dict=None)[source]

Bases: object

Instead of holding a weight for all possible states, an approximate factor simply lists a set of (potentially duplicate) states. Each state has a weight that is approximately proportional to the probability of that state.

The main difference is that the cardinalities are implicit and the row labels are explicit. In a normal factor it is reversed.

Maybe rename to sparse factor?
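A hedged toy contrast between the two representations (the numbers are made up):

# Dense factor: row labels are implicit, one weight per possible state.
dense_values = {(0, 0): 0.5, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.3}
# Approximate factor: explicit (possibly duplicate) state rows with weights
# roughly proportional to their probability; cardinalities stay implicit.
state_idxs = [[0, 0], [0, 0], [1, 1]]
weights = [0.25, 0.25, 0.2]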

CommandLine:

python -m ibeis.algo.hots.pgm_ext --exec-ApproximateFactor --show

Example

>>> # UNSTABLE_DOCTEST
>>> from ibeis.algo.hots.pgm_ext import *  # NOQA
>>> state_idxs = [[1, 1, 1], [1, 0, 1], [2, 0, 2]]
>>> weights = [.1, .2, .1]
>>> variables = ['v1', 'v2', 'v3']
>>> self = ApproximateFactor(state_idxs, weights, variables)
>>> result = str(self)
>>> print(result)
cardinality
consolidate(inplace=False)[source]

removes duplicate entries

Example

>>> # UNSTABLE_DOCTEST
>>> from ibeis.algo.hots.pgm_ext import *  # NOQA
>>> state_idxs = [[1, 0, 1], [1, 0, 1], [1, 0, 2]]
>>> weights = [.1, .2, .1]
>>> variables = ['v1', 'v2', 'v3']
>>> self = ApproximateFactor(state_idxs, weights, variables)
>>> inplace = False
>>> phi = self.consolidate(inplace)
>>> result = str(phi)
>>> print(result)
+------+------+------+-----------------------+
| v1   | v2   | v3   |   \hat{phi}(v1,v2,v3) |
|------+------+------+-----------------------|
| v1_1 | v2_0 | v3_1 |                0.3000 |
| v1_1 | v2_0 | v3_2 |                0.1000 |
+------+------+------+-----------------------+
copy()[source]

Returns a copy of the factor.

classmethod from_sampled(sampled, variables=None, statename_dict=None)[source]

convert sampled states into an approximate factor

get_sparse_values()[source]
marginalize(variables, inplace=True)[source]

Modifies the factor with marginalized values.

Parameters:
  • variables (list, array-like) – List of variables over which to marginalize.
  • inplace (bool) – If inplace=True it will modify the factor itself, else would return a new factor.
Returns:

if inplace=True (default) returns None if inplace=False returns a new Factor instance.

Return type:

Factor or None

CommandLine:

python -m ibeis.algo.hots.pgm_ext marginalize --show

Example

>>> from ibeis.algo.hots.pgm_ext import *  # NOQA
>>> state_idxs = [[1, 1, 1], [1, 0, 1], [2, 0, 2]]
>>> weights = [.1, .2, .1]
>>> variables = ['v1', 'v2', 'v3']
>>> self = ApproximateFactor(state_idxs, weights, variables)
>>> variables = ['v2']
>>> inplace = False
>>> phi = self.marginalize(variables, inplace)
>>> print(phi)
+------+------+--------------------+
| v1   | v3   |   \hat{phi}(v1,v3) |
|------+------+--------------------|
| v1_1 | v3_1 |             0.3000 |
| v1_2 | v3_2 |             0.1000 |
+------+------+--------------------+
normalize(inplace=True)[source]

Normalizes the weights of factor so that they sum to 1.

Parameters:inplace (bool) – (default = True)

CommandLine:

python -m ibeis.algo.hots.pgm_ext --exec-normalize

Example

>>> # UNSTABLE_DOCTEST
>>> from ibeis.algo.hots.pgm_ext import *  # NOQA
>>> state_idxs = [[0, 0, 1], [1, 0, 1], [2, 0, 2]]
>>> weights = [.1, .2, .1]
>>> variables = ['v1', 'v2', 'v3']
>>> self = ApproximateFactor(state_idxs, weights, variables)
>>> inplace = True
>>> print(self)
>>> self.normalize(inplace)
>>> result = ('%s' % (self,))
>>> print(result)
+------+------+------+-----------------------+
| v1   | v2   | v3   |   \hat{phi}(v1,v2,v3) |
|------+------+------+-----------------------|
| v1_0 | v2_0 | v3_1 |                0.2500 |
| v1_1 | v2_0 | v3_1 |                0.5000 |
| v1_2 | v2_0 | v3_2 |                0.2500 |
+------+------+------+-----------------------+
reorder(order=None, inplace=True)[source]

Changes internal variable ordering

CommandLine:

python -m ibeis.algo.hots.pgm_ext --exec-reorder

Example

>>> # UNSTABLE_DOCTEST
>>> from ibeis.algo.hots.pgm_ext import *  # NOQA
>>> state_idxs = [[0, 0, 1], [1, 0, 1], [2, 0, 2]]
>>> weights = [.1, .2, .1]
>>> variables = ['v1', 'v2', 'v3']
>>> self = ApproximateFactor(state_idxs, weights, variables)
>>> order = [2, 0, 1]
>>> inplace = True
>>> print(self)
>>> self.reorder(order, inplace)
>>> result = ('%s' % (self,))
>>> print(result)
+------+------+------+-----------------------+
| v3   | v1   | v2   |   \hat{phi}(v3,v1,v2) |
|------+------+------+-----------------------|
| v3_1 | v1_0 | v2_0 |                0.1000 |
| v3_1 | v1_1 | v2_0 |                0.2000 |
| v3_2 | v1_2 | v2_0 |                0.1000 |
+------+------+------+-----------------------+
scope()[source]
values
class ibeis.algo.hots.pgm_ext.TemplateCPD(ttype, basis, varpref=None, evidence_ttypes=None, pmf_func=None, special_basis_pool=None)[source]

Bases: object

Factory for templated cpds

Parameters:
  • ttype
  • basis
  • varpref (None) – Letter to use as the random variable
  • evidence_ttypes (None) – (default = None)
  • pmf_func (None) – (default = None)
  • special_basis_pool (None) – (default = None)

CommandLine:

python -m ibeis.algo.hots.pgm_ext TemplateCPD --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.pgm_ext import *  # NOQA
>>> self = TemplateCPD('coin', ['fair', 'bias'], varpref='C')
>>> cpd = self.new_cpd(0)
>>> print(cpd)
example_cpd(id_=0)[source]
new_cpd(parents=None, pmf_func=None)[source]

Makes a new random variable that is an instance of this template

parents : only used to define the name of this node.

ibeis.algo.hots.pgm_ext.coin_example()[source]

Simple example of conditional independence.

Notes

We are given a coin. We do not know if it is fair or unfair. There is an equal chance of either. (If it is unfair it has 9-to-1 odds.) Initially, the results of a coin toss are conditionally independent of any other toss. However, if we observe heads on the first toss, the chance of heads on the second toss will increase.

CommandLine:

python -m ibeis.algo.hots.pgm_ext --exec-coin_example
python -m ibeis.algo.hots.pgm_ext --exec-coin_example --show
python -m ibeis.algo.hots.pgm_ext --exec-coin_example --show --cmd

Example

>>> # UNSTABLE_DOCTEST
>>> from ibeis.algo.hots.pgm_ext import *  # NOQA
>>> model = coin_example()
>>> model.print_templates()
>>> model.print_priors()
>>> query_vars = ['T02']
>>> infr = pgmpy.inference.VariableElimination(model)
>>> # Inference (1)
>>> print('(1.a) Observe nothing')
>>> evidence1 = {}
>>> factor_list1 = infr.query(query_vars, evidence1).values()
>>> print_factors(model, factor_list1)
>>> print('(1.b)  nothing changes')
>>> # Inference (2)
>>> print('(2.a) Observe that toss 1 was heads')
>>> evidence2 = model._ensure_internal_evidence({'T01': 'heads'})
>>> factor_list2 = infr.query(query_vars, evidence2).values()
>>> print_factors(model, factor_list2)
>>> #
>>> phi1 = factor_list1[0]
>>> phi2 = factor_list2[0]
>>> assert phi2['heads'] > phi1['heads']
>>> print('(2.b) Slightly more likely to see heads in the second coin toss')
>>> #
>>> # print('Observe that toss 1 was tails')
>>> # evidence = model._ensure_internal_evidence({'T01': 'tails'})
>>> # factor_list2 = infr.query(query_vars, evidence).values()
>>> # print_factors(model, factor_list2)
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> from ibeis.algo.hots import bayes
>>> kw = bayes.cluster_query(model, query_vars,evidence2,
>>>                          method='bp', operation='marginalize')
>>> #model.show_model(fnum=1)
>>> #model.show_model(fnum=2, evidence=evidence2, factor_list=factor_list2)
>>> model.show_model(fnum=3, evidence=evidence2, **kw)
>>> model.show_markov_model(fnum=4, evidence=evidence2, factor_list=factor_list2)
>>> model.show_junction_tree(fnum=5, evidence=evidence2, factor_list=factor_list2)
>>> #netx.draw_graphviz(model, with_labels=True)
>>> ut.show_if_requested()
ibeis.algo.hots.pgm_ext.customize_model(model)[source]
ibeis.algo.hots.pgm_ext.define_model(cpd_list)[source]

Custom extensions of the pgmpy model

ibeis.algo.hots.pgm_ext.map_example()[source]

CommandLine:

python -m ibeis.algo.hots.pgm_ext --exec-map_example --show

References

https://class.coursera.org/pgm-003/lecture/44

Example

>>> # UNSTABLE_DOCTEST
>>> from ibeis.algo.hots.pgm_ext import *  # NOQA
>>> model = map_example()
>>> ut.quit_if_noshow()
>>> #netx.draw_graphviz(model, with_labels=True)
>>> pgm_viz.show_model(model, fnum=1)
>>> ut.show_if_requested()
ibeis.algo.hots.pgm_ext.mustbe_example()[source]

Simple example where observing F0 forces N0 to take on a value.

CommandLine:

python -m ibeis.algo.hots.pgm_ext --exec-mustbe_example --show

Example

>>> # UNSTABLE_DOCTEST
>>> from ibeis.algo.hots.pgm_ext import *  # NOQA
>>> model = mustbe_example()
>>> model.print_templates()
>>> model.print_priors()
>>> #infr = pgmpy.inference.VariableElimination(model)
>>> infr = pgmpy.inference.BeliefPropagation(model)
>>> print('Observe: ' + ','.join(model.pretty_evidence({})))
>>> factor_list1 = infr.query(['N0'], {}).values()
>>> map1 = infr.map_query(['N0'], evidence={})
>>> print('map1 = %r' % (map1,))
>>> print_factors(model, factor_list1)
>>> #
>>> evidence = model._ensure_internal_evidence({'F0': 'true'})
>>> print('Observe: ' + ','.join(model.pretty_evidence(evidence)))
>>> factor_list2 = infr.query(['N0'], evidence).values()
>>> map2 = infr.map_query(['N0'], evidence)
>>> print('map2 = %r' % (map2,))
>>> print_factors(model, factor_list2)
>>> #
>>> evidence = model._ensure_internal_evidence({'F0': 'false'})
>>> print('Observe: ' + ','.join(model.pretty_evidence(evidence)))
>>> factor_list3 = infr.query(['N0'], evidence).values()
>>> map3 = infr.map_query(['N0'], evidence)
>>> print('map3 = %r' % (map3,))
>>> print_factors(model, factor_list3)
>>> #
>>> phi1 = factor_list1[0]
>>> phi2 = factor_list2[0]
>>> assert phi1['fred'] == phi1['sue'], 'should be uniform'
>>> assert phi2['fred'] == 1, 'should be 1'
>>> ut.quit_if_noshow()
>>> #netx.draw_graphviz(model, with_labels=True)
>>> import plottool as pt
>>> pgm_viz.show_model(model, fnum=1)
>>> pgm_viz.show_model(model, fnum=2, evidence=evidence, factor_list=factor_list2)
>>> ut.show_if_requested()
ibeis.algo.hots.pgm_ext.print_factors(model, factor_list)[source]
ibeis.algo.hots.pgm_ext.test_markovmodel()[source]
>>> from ibeis.algo.hots.pgm_ext import *  # NOQA

ibeis.algo.hots.pgm_viz module

ibeis.algo.hots.pgm_viz.draw_bayesian_model(model, evidence={}, soft_evidence={}, fnum=None, pnum=None, **kwargs)[source]
ibeis.algo.hots.pgm_viz.draw_junction_tree(model, fnum=None, **kwargs)[source]
ibeis.algo.hots.pgm_viz.draw_map_histogram(top_assignments, fnum=None, pnum=(1, 1, 1))[source]
ibeis.algo.hots.pgm_viz.draw_markov_model(model, fnum=None, **kwargs)[source]
ibeis.algo.hots.pgm_viz.get_bayesnet_layout(model, name_nodes=None, prog=u'dot')[source]

Ensures the ordering of layers matches the order in which they were added via templates

ibeis.algo.hots.pgm_viz.get_node_viz_attrs(model, evidence, soft_evidence, factor_list, ttype_colors, **kwargs)[source]
ibeis.algo.hots.pgm_viz.make_colorcodes(model)[source]

python -m ibeis.algo.hots.bayes --exec-make_name_model --show
python -m ibeis.algo.hots.bayes --exec-cluster_query --show
python -m ibeis --tf demo_bayesnet --ev :nA=4,nS=2,Na=n0,rand_scores=True --show --verbose
python -m ibeis --tf demo_bayesnet --ev :nA=4,nS=3,Na=n0,rand_scores=True --show --verbose

ibeis.algo.hots.pgm_viz.make_factor_text(factor, name)[source]
ibeis.algo.hots.pgm_viz.print_ascii_graph(model_)[source]

pip install img2txt.py

python -c

ibeis.algo.hots.pgm_viz.show_bayesian_model(model, evidence={}, soft_evidence={}, fnum=None, **kwargs)[source]

References

http://stackoverflow.com/questions/22207802/networkx-node-level-or-layer

CommandLine:

python -m ibeis.algo.hots.pgm_viz --exec-show_model --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.pgm_viz import *  # NOQA
>>> model = '?'
>>> evidence = {}
>>> soft_evidence = {}
>>> result = show_model(model, evidence, soft_evidence)
>>> print(result)
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> ut.show_if_requested()
ibeis.algo.hots.pgm_viz.show_junction_tree(*args, **kwargs)[source]
ibeis.algo.hots.pgm_viz.show_markov_model(*args, **kwargs)[source]
ibeis.algo.hots.pgm_viz.show_model(model, *args, **kwargs)[source]

ibeis.algo.hots.pipeline module

Hotspotter pipeline module

Module Notation and Concepts::

PREFIXES:
    qaid2_XXX - prefix mapping query chip index to XXX
    qfx2_XXX  - prefix mapping query chip feature index to XXX

  • nns - a (qfx2_idx, qfx2_dist) tuple
  • idx - the index into the nnindexers descriptors
  • qfx - query feature index wrt the query chip
  • dfx - database feature index wrt the database chip
  • dist - the distance to a corresponding feature
  • fm - a list of feature match pairs / correspondences (qfx, dfx)
  • fsv - a score vector of a corresponding feature
  • valid - a valid bit for a corresponding feature

PIPELINE_VARS:: nns_list - mapping from query chip index to nns

  • qfx2_idx - ranked list of query feature indexes to database feature indexes
  • qfx2_dist - ranked list of query feature indexes to the corresponding neighbor distances
  • qaid2_norm_weight - mapping from qaid to (qfx2_normweight, qfx2_selnorm)

    = qaid2_nnfiltagg[qaid]
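A hedged toy illustration of the notation above (shapes and values are arbitrary):

import numpy as np
qfx2_idx = np.array([[10, 42], [7, 3]])         # per query feature: indices into the nnindexer
qfx2_dist = np.array([[0.1, 0.4], [0.2, 0.3]])  # corresponding distances
nns = (qfx2_idx, qfx2_dist)                     # one nns tuple per query annotation
fm = np.array([[0, 5], [1, 9]])                 # feature correspondences as (qfx, dfx) pairs
fsv = np.array([[0.8], [0.6]])                  # score vector per correspondence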

CommandLine:

To see the output of a complete pipeline run use

# Set to whichever database you like
python main.py --db PZ_MTEST --setdb
python main.py --db NAUT_test --setdb
python main.py --db testdb1 --setdb

# Then run whichever configuration you like
python main.py --query 1 --yes --noqcache -t default:codename=vsone
python main.py --query 1 --yes --noqcache -t default:codename=vsone_norm
python main.py --query 1 --yes --noqcache -t default:codename=vsmany
python main.py --query 1 --yes --noqcache -t default:codename=vsmany_nsum
TODO:
  • Don’t preload the nn-indexer in case the nearest neighbors have already been computed?

ibeis.algo.hots.pipeline.ValidMatchTup_

alias of vmt

ibeis.algo.hots.pipeline.WeightRet_

alias of weight_ret

ibeis.algo.hots.pipeline.baseline_neighbor_filter(qreq_, nns_list, impossible_daids_list, verbose=False)[source]

Removes matches to self, the same image, or the same name.

CommandLine:

python -m ibeis.algo.hots.pipeline --test-baseline_neighbor_filter

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *   # NOQA
>>> import ibeis
>>> qreq_, nns_list, impossible_daids_list = plh.testdata_pre_baselinefilter(qaid_list=[1, 2, 3, 4], codename='vsmany')
>>> nnvalid0_list = baseline_neighbor_filter(qreq_, nns_list, impossible_daids_list)
>>> ut.assert_eq(len(nnvalid0_list), len(qreq_.get_external_qaids()))
>>> #ut.assert_eq(nnvalid0_list[0].shape[1], qreq_.qparams.K, 'does not match k')
>>> #ut.assert_eq(qreq_.qparams.K, 4, 'k is not 4')
>>> assert not np.any(nnvalid0_list[0][:, 0]), (
...    'first col should be all invalid because of self match')
>>> assert not np.all(nnvalid0_list[0][:, 1]), (
...    'second col should have some good matches')
>>> ut.assert_inbounds(nnvalid0_list[0].sum(), 1000, 10000)
Example1:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *   # NOQA
>>> import ibeis
>>> qreq_, nns_list, impossible_daids_list = plh.testdata_pre_baselinefilter(codename='vsone')
>>> nnvalid0_list = baseline_neighbor_filter(qreq_, nns_list, impossible_daids_list)
>>> ut.assert_eq(len(nnvalid0_list), len(qreq_.get_external_daids()))
>>> ut.assert_eq(qreq_.qparams.K, 1, 'k is not 1')
>>> ut.assert_eq(nnvalid0_list[0].shape[1], qreq_.qparams.K, 'does not match k')
>>> ut.assert_eq(nnvalid0_list[0].sum(), 0, 'no self matches')
>>> ut.assert_inbounds(nnvalid0_list[1].sum(), 200, 1500)
ibeis.algo.hots.pipeline.build_chipmatches(qreq_, nns_list, nnvalid0_list, filtkey_list, filtweights_list, filtvalids_list, filtnormks_list, verbose=False)[source]

pipeline step 4 - builds sparse chipmatches

Takes the dense feature matches from query feature to (what could be any) database features and builds sparse matching pairs for each annotation to annotation match.

CommandLine:

python -m ibeis --tf build_chipmatches
python -m ibeis --tf build_chipmatches:0 --show
python -m ibeis --tf build_chipmatches:1 --show
Example0:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> qreq_, args = plh.testdata_pre('build_chipmatches', p=['default:codename=vsmany'])
>>> nns_list, nnvalid0_list, filtkey_list, filtweights_list, filtvalids_list, filtnormks_list = args
>>> verbose = True
>>> # execute function
>>> cm_list = build_chipmatches(qreq_, *args, verbose=verbose)
>>> # verify results
>>> [cm.assert_self(qreq_) for cm in cm_list]
>>> cm = cm_list[0]
>>> fm = cm.fm_list[cm.daid2_idx[2]]
>>> num_matches = len(fm)
>>> print('vsone num_matches = %r' % num_matches)
>>> ut.assert_inbounds(num_matches, 500, 2000, 'vsmany nmatches out of bounds')
>>> ut.quit_if_noshow()
>>> cm_list[0].show_single_annotmatch(qreq_)
>>> ut.show_if_requested()
Example1:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> verbose = True
>>> qreq_, args = plh.testdata_pre('build_chipmatches', p=['default:codename=vsone,sqrd_dist_on=True'])
>>> nns_list, nnvalid0_list, filtkey_list, filtweights_list, filtvalids_list, filtnormks_list = args
>>> # execute function
>>> cm_list = build_chipmatches(qreq_, *args, verbose=verbose)
>>> # verify results
>>> [cm.assert_self(qreq_) for cm in cm_list]
>>> cm = cm_list[0]
>>> fm = cm.fm_list[cm.daid2_idx[2]]
>>> num_matches = len(fm)
>>> print('vsone num_matches = %r' % num_matches)
>>> ut.assert_inbounds(num_matches, 25, 100, 'vsone nmatches out of bounds')
>>> ut.quit_if_noshow()
>>> cm.show_single_annotmatch(qreq_, daid=2)
>>> ut.show_if_requested()
ibeis.algo.hots.pipeline.build_impossible_daids_list(qreq_, verbose=False)[source]
Parameters:qreq (QueryRequest) – query request object with hyper-parameters

CommandLine:

python -m ibeis.algo.hots.pipeline --test-build_impossible_daids_list

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> import ibeis
>>> # build test data
>>> ibs = ibeis.opendb('testdb1')
>>> species = ibeis.const.TEST_SPECIES.ZEB_PLAIN
>>> daids = ibs.get_valid_aids(species=species)
>>> qaids = ibs.get_valid_aids(species=species)
>>> qreq_ = ibs.new_query_request(qaids, daids,
>>>                               cfgdict=dict(codename='vsmany',
>>>                                            use_k_padding=True,
>>>                                            can_match_sameimg=False,
>>>                                            can_match_samename=False))
>>> # execute function
>>> impossible_daids_list, Kpad_list = build_impossible_daids_list(qreq_)
>>> # verify results
>>> result = str((impossible_daids_list, Kpad_list))
>>> print(result)
([array([1]), array([2, 3]), array([2, 3]), array([4]), array([5, 6]), array([5, 6])], [1, 2, 2, 1, 2, 2])
ibeis.algo.hots.pipeline.cachemiss_nn_compute_fn(flags_list, qreq_, Kpad_list, K, Knorm, single_name_condition, verbose)[source]
ibeis.algo.hots.pipeline.compute_matching_dlen_extent(qreq_, fm_list, kpts_list)[source]

helper for spatial verification, computes the squared diagonal length of matching chips

CommandLine:

python -m ibeis.algo.hots.pipeline --test-compute_matching_dlen_extent

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_pre_sver('PZ_MTEST')
>>> verbose = True
>>> cm = cm_list[0]
>>> cm.set_cannonical_annot_score(cm.get_num_matches_list())
>>> cm.sortself()
>>> fm_list = cm.fm_list
>>> kpts_list = qreq_.ibs.get_annot_kpts(cm.daid_list.tolist(), config2_=qreq_.get_external_data_config2())
>>> topx2_dlen_sqrd = compute_matching_dlen_extent(qreq_, fm_list, kpts_list)
>>> ut.assert_inbounds(np.sqrt(topx2_dlen_sqrd)[0:5], 600, 1500)
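The extent computed by compute_matching_dlen_extent can be summarized with a short sketch. This is a minimal numpy illustration added for clarity, not the pipeline implementation; it assumes keypoint rows start with (x, y) and that fm is an (N, 2) array of (qfx, dfx) index pairs whose second column indexes the database keypoints.

import numpy as np

def matching_dlen_sqrd(kpts, fm):
    # keypoints that participate in the match (column 1 of fm = database feature index)
    xy = kpts[fm.T[1], 0:2]
    # squared diagonal length of the axis-aligned box spanned by the matched keypoints
    extent = xy.max(axis=0) - xy.min(axis=0)
    return float((extent ** 2).sum())

Spatial verification can then use this per-annotation value to scale its inlier thresholds.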
ibeis.algo.hots.pipeline.get_sparse_matchinfo_nonagg(qreq_, qfx2_idx, qfx2_valid0, qfx2_score_list, qfx2_valid_list, qfx2_normk_list, Knorm)[source]

builds sparse iterator that generates feature match pairs, scores, and ranks

Returns:vmt a tuple of corresponding lists. Each item in the list corresponds to a daid, dfx, scorevec, rank, norm_aid, norm_fx...
Return type:ValidMatchTup_

CommandLine:

python -m ibeis.algo.hots.pipeline --test-get_sparse_matchinfo_nonagg:0 --show
python -m ibeis.algo.hots.pipeline --test-get_sparse_matchinfo_nonagg:1 --show

utprof.py -m ibeis.algo.hots.pipeline --test-get_sparse_matchinfo_nonagg
Example1:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> verbose = True
>>> qreq_, qaid, daid, args = plh.testdata_sparse_matchinfo_nonagg(p=['default:codename=vsone'])
>>> qfx2_idx, qfx2_valid0, qfx2_score_list, qfx2_valid_list, qfx2_normk_list, Knorm = args
>>> # execute function
>>> vmt = get_sparse_matchinfo_nonagg(qreq_, *args)
>>> # check results
>>> assert ut.allsame(list(map(len, vmt[:-2]))), 'need same num rows'
>>> ut.assert_inbounds(vmt.dfx, -1, qreq_.ibs.get_annot_num_feats(qaid, config2_=qreq_.qparams))
>>> ut.assert_inbounds(vmt.qfx, -1, qreq_.ibs.get_annot_num_feats(daid, config2_=qreq_.qparams))
>>> ut.quit_if_noshow()
>>> daid_list = [daid]
>>> vmt_list = [vmt]
>>> cm = chip_match.ChipMatch.from_vsone_match_tup(vmt_list, daid_list=daid_list, qaid=qaid)
>>> cm.assert_self(verbose=False)
>>> ut.quit_if_noshow()
>>> cm.show_single_annotmatch(qreq_)
>>> ut.show_if_requested()
Example0:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> verbose = True
>>> qreq_, qaid, daid, args = plh.testdata_sparse_matchinfo_nonagg(
>>>     defaultdb='PZ_MTEST', p=['default:Knorm=3,normalizer_rule=name,const_on=True,ratio_thresh=.2,sqrd_dist_on=True'])
>>> qfx2_idx, qfx2_valid0, qfx2_score_list, qfx2_valid_list, qfx2_normk_list, Knorm = args
>>> # execute function
>>> vmt = get_sparse_matchinfo_nonagg(qreq_, *args)
>>> # check results
>>> assert ut.allsame(list(map(len, vmt[:-2]))), 'need same num rows'
>>> ut.assert_inbounds(vmt.qfx, -1, qreq_.ibs.get_annot_num_feats(qaid, config2_=qreq_.qparams))
>>> ut.assert_inbounds(vmt.dfx, -1, np.array(qreq_.ibs.get_annot_num_feats(vmt.daid, config2_=qreq_.qparams)))
>>> cm = chip_match.ChipMatch.from_vsmany_match_tup(vmt, qaid=qaid)
>>> cm.assert_self(verbose=False)
>>> ut.quit_if_noshow()
>>> cm.show_single_annotmatch(qreq_)
>>> ut.show_if_requested()
ibeis.algo.hots.pipeline.nearest_neighbor_cacheid2(qreq_, Kpad_list)[source]

Returns a hacky cacheid for neighbor configs. DEPRECATE: this will be replaced by dtool caching

Parameters:
  • qreq (QueryRequest) – query request object with hyper-parameters
  • Kpad_list (list) –
Returns:

(nn_mid_cacheid_list, nn_cachedir)

Return type:

tuple

CommandLine:

python -m ibeis.algo.hots.pipeline --exec-nearest_neighbor_cacheid2
python -m ibeis.algo.hots.pipeline --exec-nearest_neighbor_cacheid2 --superstrict

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> import ibeis
>>> verbose = True
>>> cfgdict = dict(K=4, Knorm=1, use_k_padding=False)
>>> # test 1
>>> p = 'default' + ut.get_cfg_lbl(cfgdict)
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='testdb1', p=[p], qaid_override=[1, 2], daid_override=[1, 2, 3, 4, 5])
>>> locals_ = plh.testrun_pipeline_upto(qreq_, 'nearest_neighbors')
>>> Kpad_list, = ut.dict_take(locals_, ['Kpad_list'])
>>> tup = nearest_neighbor_cacheid2(qreq_, Kpad_list)
>>> (nn_cachedir, nn_mid_cacheid_list) = tup
>>> result1 = 'nn_mid_cacheid_list = ' + ut.list_str(nn_mid_cacheid_list)
>>> # test 2
>>> cfgdict2 = dict(K=2, Knorm=3, use_k_padding=True)
>>> p2 = 'default' + ut.get_cfg_lbl(cfgdict2)
>>> ibs = qreq_.ibs
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='testdb1', p=[p2], qaid_override=[1, 2], daid_override=[1, 2, 3, 4, 5])
>>> locals_ = plh.testrun_pipeline_upto(qreq_, 'nearest_neighbors')
>>> Kpad_list, = ut.dict_take(locals_, ['Kpad_list'])
>>> tup = nearest_neighbor_cacheid2(qreq_, Kpad_list)
>>> (nn_cachedir, nn_mid_cacheid_list) = tup
>>> result2 = 'nn_mid_cacheid_list = ' + ut.list_str(nn_mid_cacheid_list)
>>> print(result1)
>>> print(result2)
ibeis.algo.hots.pipeline.nearest_neighbors(qreq_, Kpad_list, verbose=False)[source]

Plain Nearest Neighbors

CommandLine:

python -m ibeis.algo.hots.pipeline --test-nearest_neighbors
python -m ibeis.algo.hots.pipeline --test-nearest_neighbors --db PZ_MTEST --qaids=1:100
utprof.py -m ibeis.algo.hots.pipeline --test-nearest_neighbors --db PZ_MTEST --qaids=1:100

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> import ibeis
>>> verbose = True
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='testdb1', qaid_override=[1, 2, 3])
>>> ibs = qreq_.ibs
>>> locals_ = plh.testrun_pipeline_upto(qreq_, 'nearest_neighbors')
>>> Kpad_list, = ut.dict_take(locals_, ['Kpad_list'])
>>> # execute function
>>> nn_list = nearest_neighbors(qreq_, Kpad_list, verbose=verbose)
>>> (qfx2_idx, qfx2_dist) = nn_list[0]
>>> num_neighbors = Kpad_list[0] + qreq_.qparams.K + qreq_.qparams.Knorm
>>> # Assert nns tuple is valid
>>> ut.assert_eq(qfx2_idx.shape, qfx2_dist.shape)
>>> ut.assert_eq(qfx2_idx.shape[1], num_neighbors)
>>> ut.assert_inbounds(qfx2_idx.shape[0], 1000, 3000)
ibeis.algo.hots.pipeline.nearest_neighbors_withcache(qreq_, Kpad_list, verbose=False)[source]

Tries to load nearest neighbors from a cache instead of recomputing them.
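A minimal sketch of the load-or-recompute control flow this docstring describes. The pickle-file cache below is only an illustration; the real cache key comes from nearest_neighbor_cacheid2.

import os
import pickle

def cached_nearest_neighbors(qreq_, Kpad_list, cachedir, verbose=False):
    # illustrative cache path; the real code derives it from nearest_neighbor_cacheid2
    fpath = os.path.join(cachedir, 'nn_cache.pkl')
    if os.path.exists(fpath):
        with open(fpath, 'rb') as file_:
            return pickle.load(file_)
    nns_list = nearest_neighbors(qreq_, Kpad_list, verbose=verbose)
    with open(fpath, 'wb') as file_:
        pickle.dump(nns_list, file_)
    return nns_list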

ibeis.algo.hots.pipeline.request_ibeis_query_L0(ibs, qreq_, verbose=False)[source]

Driver logic of query pipeline

Note

Make sure _pipeline_helpers.testrun_pipeline_upto reflects what happens in this function.

Parameters:
  • ibs (ibeis.IBEISController) – IBEIS database object to be queried. Technically this object already lives inside of qreq_.
  • qreq (ibeis.QueryRequest) – hyper-parameters. Use ibs.new_query_request to create one
Returns:

cm_list containing ibeis.ChipMatch objects

Return type:

list

CommandLine:

python -m ibeis.algo.hots.pipeline --test-request_ibeis_query_L0:0 --show
python -m ibeis.algo.hots.pipeline --test-request_ibeis_query_L0:1 --show

python -m ibeis.algo.hots.pipeline --test-request_ibeis_query_L0:0 --db testdb1 --qaid 325
python -m ibeis.algo.hots.pipeline --test-request_ibeis_query_L0:0 --db testdb3 --qaid 325
# background match
python -m ibeis.algo.hots.pipeline --test-request_ibeis_query_L0:0 --db NNP_Master3 --qaid 12838

python -m ibeis.algo.hots.pipeline --test-request_ibeis_query_L0:0
python -m ibeis.algo.hots.pipeline --test-request_ibeis_query_L0:0 --db PZ_MTEST -a timectrl:qindex=0:256
python    -m ibeis.algo.hots.pipeline --test-request_ibeis_query_L0:0 --db PZ_Master1 -a timectrl:qindex=0:256
utprof.py -m ibeis.algo.hots.pipeline --test-request_ibeis_query_L0:0 --db PZ_Master1 -a timectrl:qindex=0:256
Example1:
>>> # ENABLE_DOCTEST
>>> # one-vs-many:
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> import ibeis
>>> qreq_ = ibeis.init.main_helpers.testdata_qreq_(a=['default:qindex=0:2,dindex=0:10'])
>>> ibs = qreq_.ibs
>>> print(qreq_.qparams.query_cfgstr)
>>> verbose = True
>>> cm_list = request_ibeis_query_L0(ibs, qreq_, verbose=verbose)
>>> cm = cm_list[0]
>>> ut.quit_if_noshow()
>>> cm.ishow_analysis(qreq_, fnum=0, make_figtitle=True)
>>> ut.show_if_requested()
Example2:
>>> # ENABLE_DOCTEST
>>> # one-vs-one:
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> import ibeis  # NOQA
>>> cfgdict1 = dict(codename='vsone', sv_on=False)
>>> p = 'default' + ut.get_cfg_lbl(cfgdict1)
>>> qreq_1 = ibeis.testdata_qreq_(defaultdb='testdb1', p=[p])
>>> ibs1 = qreq_1.ibs
>>> print(qreq_1.qparams.query_cfgstr)
>>> cm_list1 = request_ibeis_query_L0(ibs1, qreq_1)
>>> cm1 = cm_list1[0]
>>> ut.quit_if_noshow()
>>> cm1.ishow_analysis(qreq_1, fnum=1, make_figtitle=True)
>>> ut.show_if_requested()
ibeis.algo.hots.pipeline.spatial_verification(qreq_, cm_list_FILT, verbose=False)[source]

pipeline step 5 - spatially verify feature matches

Returns:cm_list_SVER – new list of spatially verified chipmatches
Return type:list

CommandLine:

python -m ibeis.algo.hots.pipeline --test-spatial_verification --show
python -m ibeis.algo.hots.pipeline --test-spatial_verification --show --qaid 1
python -m ibeis.algo.hots.pipeline --test-spatial_verification:0

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_pre_sver('PZ_MTEST', qaid_list=[18])
>>> scoring.score_chipmatch_list(qreq_, cm_list, qreq_.qparams.prescore_method)  # HACK
>>> cm = cm_list[0]
>>> top_nids = cm.get_top_nids(6)
>>> verbose = True
>>> cm_list_SVER = spatial_verification(qreq_, cm_list)
>>> # Test Results
>>> cmSV = cm_list_SVER[0]
>>> scoring.score_chipmatch_list(qreq_, cm_list_SVER, qreq_.qparams.score_method)  # HACK
>>> top_nids_SV = cmSV.get_top_nids(6)
>>> cm.print_csv(sort=True)
>>> cmSV.print_csv(sort=False)
>>> gt_daids  = np.intersect1d(cm.get_groundtruth_daids(), cmSV.get_groundtruth_daids())
>>> fm_list   = cm.get_annot_fm(gt_daids)
>>> fmSV_list = cmSV.get_annot_fm(gt_daids)
>>> maplen = lambda list_: np.array(list(map(len, list_)))
>>> assert len(gt_daids) > 0, 'ground truth did not survive'
>>> ut.assert_lessthan(maplen(fmSV_list), maplen(fm_list))  # feature matches should have been filtered
>>> ut.quit_if_noshow()
>>> cmSV.show_daids_matches(qreq_, gt_daids)
>>> import plottool as pt
>>> #homog_tup = (refined_inliers, H)
>>> #aff_tup = (aff_inliers, Aff)
>>> #pt.draw_sv.show_sv(rchip1, rchip2, kpts1, kpts2, fm, aff_tup=aff_tup, homog_tup=homog_tup, refine_method=refine_method)
>>> ut.show_if_requested()
ibeis.algo.hots.pipeline.sver_single_chipmatch(qreq_, cm)[source]

Spatially verifies a shortlist of a single chipmatch

TODO: move to chip match?

loops over a shortlist of results for a specific query annotation

Parameters:
Returns:

cmSV

Return type:

ibeis.ChipMatch

CommandLine:

python -m ibeis --tf draw_rank_cdf --db PZ_Master1 --show \
    -t best:refine_method=[homog,affine,cv2-homog,cv2-ransac-homog,cv2-lmeds-homog] \
    -a timectrlhard ---acfginfo --veryverbtd

python -m ibeis --tf draw_rank_cdf --db PZ_Master1 --show \
    -t best:refine_method=[homog,cv2-lmeds-homog],full_homog_checks=[True,False] \
    -a timectrlhard ---acfginfo --veryverbtd

python -m ibeis --tf sver_single_chipmatch --show \
    -t default:full_homog_checks=True -a default --qaid 18

python -m ibeis --tf sver_single_chipmatch --show \
    -t default:refine_method=affine -a default --qaid 18

python -m ibeis --tf sver_single_chipmatch --show \
    -t default:refine_method=cv2-homog -a default --qaid 18

python -m ibeis --tf sver_single_chipmatch --show \
    -t default:refine_method=cv2-homog,full_homog_checks=True -a default --qaid 18

python -m ibeis --tf sver_single_chipmatch --show \
    -t default:refine_method=cv2-homog,full_homog_checks=False -a default --qaid 18

python -m ibeis --tf sver_single_chipmatch --show \
    -t default:refine_method=cv2-lmeds-homog,full_homog_checks=False -a default --qaid 18

python -m ibeis --tf sver_single_chipmatch --show \
    -t default:refine_method=cv2-ransac-homog,full_homog_checks=False -a default --qaid 18

python -m ibeis --tf sver_single_chipmatch --show \
    -t default:full_homog_checks=False -a default --qaid 18

python -m ibeis --tf sver_single_chipmatch --show --qaid=18 --y=0
python -m ibeis --tf sver_single_chipmatch --show --qaid=18 --y=1

Example

>>> # DISABLE_DOCTEST
>>> # Visualization
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> qreq_, args = plh.testdata_pre('spatial_verification', defaultdb='PZ_MTEST')  #, qaid_list=[18])
>>> cm_list = args.cm_list_FILT
>>> ibs = qreq_.ibs
>>> cm = cm_list[0]
>>> scoring.score_chipmatch_list(qreq_, cm_list, qreq_.qparams.prescore_method)  # HACK
>>> #locals_ = ut.exec_func_src(sver_single_chipmatch, key_list=['svtup_list'], sentinal='# <SENTINAL>')
>>> #svtup_list1, = locals_
>>> source = ut.get_func_sourcecode(sver_single_chipmatch, stripdef=True, strip_docstr=True)
>>> source = ut.replace_between_tags(source, '', '# <SENTINAL>', '# </SENTINAL>')
>>> globals_ = globals().copy()
>>> exec(source, globals_)
>>> svtup_list = globals_['svtup_list']
>>> gt_daids = cm.get_groundtruth_daids()
>>> x = ut.get_argval('--y', type_=int, default=0)
>>> #print('x = %r' % (x,))
>>> #daid = daids[x % len(daids)]
>>> notnone_list = ut.not_list(ut.flag_None_items(svtup_list))
>>> valid_idxs = np.where(notnone_list)
>>> valid_daids = cm.daid_list[valid_idxs]
>>> assert len(valid_daids) > 0, 'cannot spatially verify'
>>> valid_gt_daids = np.intersect1d(gt_daids, valid_daids)
>>> #assert len(valid_gt_daids) == 0, 'no sver groundtruth'
>>> daid = valid_gt_daids[x] if len(valid_gt_daids) > 0 else valid_daids[x]
>>> idx = cm.daid2_idx[daid]
>>> svtup = svtup_list[idx]
>>> assert svtup is not None, 'SV TUP IS NONE'
>>> refined_inliers, refined_errors, H = svtup[0:3]
>>> aff_inliers, aff_errors, Aff = svtup[3:6]
>>> homog_tup = (refined_inliers, H)
>>> aff_tup = (aff_inliers, Aff)
>>> fm = cm.fm_list[idx]
>>> aid1 = cm.qaid
>>> aid2 = daid
>>> rchip1, = ibs.get_annot_chips([aid1], config2_=qreq_.get_external_query_config2())
>>> kpts1,  = ibs.get_annot_kpts([aid1], config2_=qreq_.get_external_query_config2())
>>> rchip2, = ibs.get_annot_chips([aid2], config2_=qreq_.get_external_data_config2())
>>> kpts2, = ibs.get_annot_kpts([aid2], config2_=qreq_.get_external_data_config2())
>>> import plottool as pt
>>> show_aff = not ut.get_argflag('--noaff')
>>> refine_method = qreq_.qparams.refine_method if not ut.get_argflag('--norefinelbl') else ''
>>> pt.draw_sv.show_sv(rchip1, rchip2, kpts1, kpts2, fm, aff_tup=aff_tup,
>>>                    homog_tup=homog_tup, show_aff=show_aff,
>>>                    refine_method=refine_method)
>>> ut.show_if_requested()
ibeis.algo.hots.pipeline.vsone_reranking(qreq_, cm_list_SVER, verbose=False)[source]

CommandLine:

python -m ibeis.algo.hots.pipeline --test-vsone_reranking
python -m ibeis.algo.hots.pipeline --test-vsone_reranking --show
Example2:
>>> # SLOW_DOCTEST (IMPORTANT)
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> import ibeis
>>> cfgdict = dict(prescore_method='nsum', score_method='nsum', vsone_reranking=True)
>>> p = 'default' + ut.get_cfg_lbl(cfgdict)
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='PZ_MTEST', p=[p], qaid_override=[2])
>>> ibs = qreq_.ibs
>>> locals_ = plh.testrun_pipeline_upto(qreq_, 'vsone_reranking')
>>> cm_list = locals_['cm_list_SVER']
>>> verbose = True
>>> cm_list_VSONE = vsone_reranking(qreq_, cm_list, verbose=verbose)
>>> ut.quit_if_noshow()
>>> from ibeis.algo.hots import vsone_pipeline
>>> import plottool as pt
>>> # NOTE: the aid2_score field must have been hacked
>>> vsone_pipeline.show_top_chipmatches(ibs, cm_list, 0,  'prescore')
>>> vsone_pipeline.show_top_chipmatches(ibs, cm_list_VSONE,   1, 'vsone-reranked')
>>> pt.show_if_requested()
ibeis.algo.hots.pipeline.weight_neighbors(qreq_, nns_list, nnvalid0_list, verbose=False)[source]

pipeline step 3 - assigns weights to feature matches based on the active filter list

CommandLine:

python -m ibeis.algo.hots.pipeline --test-weight_neighbors
python -m ibeis.algo.hots.pipeline --test-weight_neighbors:0 --verbose --verbtd --ainfo --nocache --veryverbose
python -m ibeis.algo.hots.pipeline --test-weight_neighbors:0 --show
python -m ibeis.algo.hots.pipeline --test-weight_neighbors:1 --show

python -m ibeis.algo.hots.pipeline --test-weight_neighbors:0 --show -t default:lnbnn_normer=lnbnn_fg_0.9__featscore,lnbnn_norm_thresh=.9

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> qreq_, args = plh.testdata_pre('weight_neighbors', defaultdb='testdb1',
>>>                                a=['default:qindex=0:3,dindex=0:5,hackerrors=False'],
>>>                                p=['default:codename=vsmany,bar_l2_on=True,fg_on=False'], verbose=True)
>>> nns_list, nnvalid0_list = args
>>> verbose = True
>>> # execute function
>>> weight_ret = weight_neighbors(qreq_, nns_list, nnvalid0_list, verbose)
>>> filtkey_list, filtweights_list, filtvalids_list, filtnormks_list = weight_ret
>>> import plottool as pt
>>> verbose = True
>>> cm_list = build_chipmatches(
>>>     qreq_, nns_list, nnvalid0_list, filtkey_list, filtweights_list,
>>>     filtvalids_list, filtnormks_list, verbose=verbose)
>>> ut.quit_if_noshow()
>>> cm = cm_list[0]
>>> cm.score_nsum(qreq_)
>>> cm.ishow_analysis(qreq_)
>>> ut.show_if_requested()

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> qreq_, args = plh.testdata_pre('weight_neighbors', defaultdb='testdb1',
>>>                                a=['default:qindex=0:3,dindex=0:5,hackerrors=False'],
>>>                                p=['default:codename=vsmany,bar_l2_on=True,fg_on=False'], verbose=True)
>>> nns_list, nnvalid0_list = args
>>> verbose = True
>>> # execute function
>>> weight_ret = weight_neighbors(qreq_, nns_list, nnvalid0_list, verbose)
>>> filtkey_list, filtweights_list, filtvalids_list, filtnormks_list = weight_ret
>>> nInternAids = len(qreq_.get_internal_qaids())
>>> nFiltKeys = len(filtkey_list)
>>> filtweight_depth = ut.depth_profile(filtweights_list)
>>> filtvalid_depth = ut.depth_profile(filtvalids_list)
>>> ut.assert_eq(nInternAids, len(filtweights_list))
>>> ut.assert_eq(nInternAids, len(filtvalids_list))
>>> ut.assert_eq(ut.get_list_column(filtweight_depth, 0), [nFiltKeys] * nInternAids)
>>> ut.assert_eq(filtvalid_depth, (nInternAids, nFiltKeys))
>>> ut.assert_eq(filtvalids_list, [[None, None], [None, None], [None, None]])
>>> ut.assert_eq(filtkey_list, [hstypes.FiltKeys.LNBNN, hstypes.FiltKeys.BARL2])
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> verbose = True
>>> cm_list = build_chipmatches(
>>>     qreq_, nns_list, nnvalid0_list, filtkey_list, filtweights_list,
>>>     filtvalids_list, filtnormks_list, verbose=verbose)
>>> cm = cm_list[0]
>>> cm.score_nsum(qreq_)
>>> cm.ishow_analysis(qreq_)
>>> ut.show_if_requested()

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.pipeline import *  # NOQA
>>> qreq_, args = plh.testdata_pre('weight_neighbors', defaultdb='testdb1',
>>>                                a=['default:qindex=0:1,dindex=0:5,hackerrors=False'],
>>>                                p=['default:codename=vsone,fg_on=False,ratio_thresh=.625'], verbose=True)
>>> nns_list, nnvalid0_list = args
>>> ibs = qreq_.ibs
>>> weight_ret = weight_neighbors(qreq_, nns_list, nnvalid0_list)
>>> filtkey_list, filtweights_list, filtvalids_list, filtnormks_list = weight_ret
>>> nFiltKeys = len(filtkey_list)
>>> nInternAids = len(qreq_.get_internal_qaids())
>>> filtweight_depth = ut.depth_profile(filtweights_list)
>>> filtvalid_depth = ut.depth_profile(filtvalids_list)
>>> ut.assert_eq(nInternAids, len(filtweights_list))
>>> ut.assert_eq(nInternAids, len(filtvalids_list))
>>> target = [nFiltKeys] * nInternAids
>>> ut.assert_eq(ut.get_list_column(filtweight_depth, 0), target)
>>> ut.assert_eq(filtkey_list, [hstypes.FiltKeys.RATIO])
>>> assert filtvalids_list[0][0] is not None
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> verbose = True
>>> cm_list = build_chipmatches(
>>>     qreq_, nns_list, nnvalid0_list, filtkey_list, filtweights_list,
>>>     filtvalids_list, filtnormks_list, verbose=verbose)
>>> cm = cm_list[0]
>>> cm.score_nsum(qreq_)
>>> cm.ishow_analysis(qreq_)
>>> ut.show_if_requested()

ibeis.algo.hots.precision_recall module

TODO: DEPRECATE WITH QRES

IBEIS AGNOSTIC DEFINITIONS ARE NOW IN VTOOL

ibeis.algo.hots.precision_recall.draw_precision_recall_curve_(recall_range_, p_interp_curve, title_pref=None, fnum=1)[source]
ibeis.algo.hots.precision_recall.get_average_percision_(qres, ibs=None, gt_aids=None)[source]

gets average precision using the PASCAL definition

FIXME: Use only the groundtruth that could have been matched in the database. (shouldn’t be an issue until we start using daid subsets)

References

http://en.wikipedia.org/wiki/Information_retrieval
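For reference, a minimal numpy sketch of the PASCAL-style 11-point interpolated average precision; this only illustrates the definition and is not the qres-based implementation above.

import numpy as np

def interpolated_average_precision(precision_curve, recall_curve, nSamples=11):
    precision = np.asarray(precision_curve, dtype=float)
    recall = np.asarray(recall_curve, dtype=float)
    recall_range_ = np.linspace(0, 1, nSamples)
    # interpolated precision at recall r = max precision at any recall >= r
    p_interp_curve = np.array([
        precision[recall >= r].max() if np.any(recall >= r) else 0.0
        for r in recall_range_])
    return p_interp_curve.mean()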

ibeis.algo.hots.precision_recall.get_interpolated_precision_vs_recall_(qres, ibs=None, gt_aids=None)[source]
ibeis.algo.hots.precision_recall.get_nFalseNegative(TP, atrank, nGroundTruth)[source]

the number of documents we should have retrieved but didn’t

ibeis.algo.hots.precision_recall.get_nFalsePositive(TP, atrank)[source]

the number of documents we should not have retrieved

ibeis.algo.hots.precision_recall.get_nTruePositive(atrank, was_retrieved, gt_ranks)[source]

the number of documents we got right

ibeis.algo.hots.precision_recall.get_precision(TP, FP)[source]

precision (positive predictive value)

ibeis.algo.hots.precision_recall.get_precision_recall_curve_(qres, ibs=None, gt_aids=None)[source]

CommandLine:

python -m ibeis.algo.hots.precision_recall --test-get_precision_recall_curve_ --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.hots_query_result import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb('PZ_MTEST')
>>> qaids = ibs.get_valid_aids()[14:15]
>>> daids = ibs.get_valid_aids()
>>> qres = ibs.query_chips(qaids, daids)[0]
>>> gt_aids = None
>>> atrank  = 18
>>> nSamples = 20
>>> ofrank_curve, precision_curve, recall_curve = qres.get_precision_recall_curve(ibs=ibs, gt_aids=gt_aids)
>>> recall_range_, p_interp_curve = interpolate_precision_recall_(precision_curve, recall_curve, nSamples=nSamples)
>>> print((recall_range_, p_interp_curve))
>>> ut.quit_if_noshow()
>>> draw_precision_recall_curve_(recall_range_, p_interp_curve)
>>> ut.show_if_requested()

References

http://en.wikipedia.org/wiki/Precision_and_recall

ibeis.algo.hots.precision_recall.get_recall(TP, FN)[source]

recall, true positive rate, sensitivity, hit rate
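As a quick reference for the counting helpers above (assuming TP, FP, and FN are counts taken at a given rank):

def precision(TP, FP):
    # positive predictive value
    return TP / float(TP + FP) if (TP + FP) else 0.0

def recall(TP, FN):
    # true positive rate / sensitivity / hit rate
    return TP / float(TP + FN) if (TP + FN) else 0.0

# e.g. 12 relevant hits in the top 18 results, with 20 relevant documents total:
# precision = 12 / 18 ~= 0.667, recall = 12 / 20 = 0.6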

ibeis.algo.hots.precision_recall.interpolate_precision_recall_(precision_curve, recall_curve, nSamples=11)[source]
ibeis.algo.hots.precision_recall.show_precision_recall_curve_(qres, ibs=None, gt_aids=None, fnum=1)[source]

CHANGE NAME TO REFERENCE QRES

ibeis.algo.hots.qt_inc_automatch module

CommandLine:

>>> # Profile
utprof.py -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:3 --num-init 5000 --stateful-query
utprof.py -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:0

python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:3 --ia 0
python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:3 --ia 0 --force-serial

CommandLine:

>>> # Autonomous Test
python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:0

CommandLine:

>>> # Interactive Test
python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:1 --ia 0 --aid_order=same
python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:0 --ia 0
TODO:
  • spatially constrained matching

  • benchmarks for reconstruction vs addition.

    • accuracy

compares the percentage of correct assignments; error = (qfx2_idx_reindex != qfx2_idx_append).sum()

    • time
      compares time of
      • addition
      • reindex
      • multindex search
      • regular search

      error = (qfx2_idx_reindex != qfx2_idx_append).sum()

#import numpy as np
#import sys
#from ibeis.algo.hots import automated_oracle as ao
#from ibeis.algo.hots import automated_helpers as ah
#from ibeis.algo.hots import special_query *

class ibeis.algo.hots.qt_inc_automatch.IncQueryHarness[source]

Bases: PyQt4.QtCore.QObject

Provides incremental and interactive query with a way to work around hitting the recursion limit.

TODO: maybe abstract this into an interruptible loop harness
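A minimal sketch (not the IncQueryHarness implementation) of the signal/slot trick used to avoid hitting the recursion limit: each step schedules the next one through a queued connection, so control returns to the Qt event loop instead of nesting Python calls.

from PyQt4 import QtCore

class LoopHarness(QtCore.QObject):
    next_step_signal = QtCore.pyqtSignal()

    def __init__(self, num_steps):
        super(LoopHarness, self).__init__()
        self.count = 0
        self.num_steps = num_steps
        # Queued connection: the slot runs from the event loop
        # (requires a running QApplication), never as a nested call.
        self.next_step_signal.connect(self.next_step_slot,
                                      QtCore.Qt.QueuedConnection)

    def start(self):
        self.next_step_signal.emit()

    def next_step_slot(self):
        self.count += 1
        if self.count < self.num_steps:
            # schedules the next iteration; the call stack stays flat
            self.next_step_signal.emit()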

begin_incremental_query(ibs, qaid_list, back=None)[source]

runs incremental query in live mode

exemplar_decision_signal
exemplar_decision_slot(exemplar_decision)[source]
name_decision_signal
name_decision_slot(chosen_names)[source]

the name decision signal was emitted

next_query_signal
next_query_slot()[source]

callback used when all interactions are completed. Generates the next incremental query and then tries the automatic interactions

setup_back_callbacks(back, incinfo)[source]
test_incremental_query(ibs_gt, ibs, aid_list1, aid1_to_aid2, num_initial=0, interactive_after=None, back=None)[source]

Adds and queries new annotations one at a time with oracle guidance

ibeis.algo.hots.qt_inc_automatch.exec_interactive_incremental_queries(ibs, qaid_list, back=None)[source]
ibeis.algo.hots.qt_inc_automatch.incremental_test_qt(ibs, num_initial=0)[source]

CommandLine:

python -m ibeis.algo.hots.qt_inc_automatch --test-incremental_test_qt

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.qt_inc_automatch import *  # NOQA
>>> main_locals = ibeis.main(db='testdb1')
>>> ibs = main_locals['ibs']
>>> back = main_locals['back']
>>> #num_initial = 0
>>> num_initial = 0
>>> incremental_test_qt(ibs, num_initial)
>>> pt.present()
>>> execstr = ibeis.main_loop(main_locals)
>>> print(execstr)
ibeis.algo.hots.qt_inc_automatch.test_inc_query(ibs_gt, num_initial=0)[source]

entry point for interactive query tests; see test_interactive_incremental_queries

Parameters:
  • ibs (ibeis.IBEISController) – IBEIS controller object
  • qaid_list (list) – list of annotation-ids to query

CommandLine:

python dev.py -t inc --db PZ_MTEST --qaid 1:30:3 --cmd
python dev.py --db PZ_MTEST --allgt --cmd
python dev.py --db PZ_MTEST --allgt -t inc
python dev.py --db PZ_MTEST --allgt -t inc

CommandLine:

python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:0  --interact-after 444440 --noqcache
python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:1  --interact-after 444440 --noqcache

python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:0
python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:1
python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:2
python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:3

utprof.py -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:3 --ninit 5000
utprof.py -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:0

# Writes out test script
python -c "import utool as ut; ut.write_modscript_alias('Tinc.sh', 'ibeis.algo.hots.qt_inc_automatch')"

sh Tinc.sh --test-test_inc_query:0
sh Tinc.sh --test-test_inc_query:1
sh Tinc.sh --test-test_inc_query:2
sh Tinc.sh --test-test_inc_query:3

utprof.py -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:3 --num-init 5000

sh Tinc.sh --test-test_inc_query:0 --ninit 10
sh Tinc.sh --test-test_inc_query:0 --ninit 10 --verbose-debug --verbose-helpful

python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:0 --ia 10

# Runs into a merge case
python -m ibeis.algo.hots.qt_inc_automatch --test-test_inc_query:0 --ia 30
Example0:
>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.automated_matcher import *  # NOQA
>>> ibs_gt = ibeis.opendb('testdb1')
>>> test_inc_query(ibs_gt)
Example1:
>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.automated_matcher import *  # NOQA
>>> ibs_gt = ibeis.opendb('PZ_MTEST')
>>> test_inc_query(ibs_gt)
Example2:
>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.automated_matcher import *  # NOQA
>>> ibs_gt = ibeis.opendb('GZ_ALL')
>>> test_inc_query(ibs_gt)
Example3:
>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.automated_matcher import *  # NOQA
>>> ibs_gt = ibeis.opendb('PZ_Master0')
>>> test_inc_query(ibs_gt)

ibeis.algo.hots.query_params module

class ibeis.algo.hots.query_params.QueryParams(qparams, query_cfg=None, cfgdict=None)[source]

Bases: _abcoll.Mapping

__getstate__(qparams)[source]

Make QueryParams pickleable

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_params import *  # NOQA
>>> from six.moves import cPickle as pickle
>>> qparams = testdata_queryparams()
>>> qparams_dump = pickle.dumps(qparams)
>>> qparams2 = pickle.loads(qparams_dump)
copy(qparams)[source]
get(qparams, key, *d)[source]

get a parameter value by string

get_postsver_filtkey_list(qparams)[source]

HACK: gets columns of fsv post spatial verification. This will eventually be incorporated into cmtup_old instead and will not be dependent on specifically where you are in the pipeline

ibeis.algo.hots.query_params.testdata_queryparams()[source]

ibeis.algo.hots.query_request module

TODO:
replace with dtool; rename to IdentifyRequest
class ibeis.algo.hots.query_request.QueryRequest(qreq_)[source]

Bases: object

Request object for a pipeline parameter run

__getstate__(qreq_)[source]

Make QueryRequest pickleable

CommandLine:

python -m ibeis.dev -t candidacy --db testdb1

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> from six.moves import cPickle as pickle
>>> qreq_, ibs = testdata_qreq()
>>> qreq_dump = pickle.dumps(qreq_)
>>> qreq2_ = pickle.loads(qreq_dump)
add_internal_daids(qreq_, new_daids)[source]

State Modification: adds new daids to the query request. Should only be done between query pipeline runs.
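A hedged usage sketch of the intended call pattern (execute and new_query_request are documented elsewhere in this module; the aid slices are arbitrary):

import ibeis
ibs = ibeis.opendb('testdb1')
qaids = ibs.get_valid_aids()[0:2]
daids = ibs.get_valid_aids()[0:4]
new_daids = ibs.get_valid_aids()[4:6]
qreq_ = ibs.new_query_request(qaids, daids)
cm_list = qreq_.execute()              # first pipeline run
qreq_.add_internal_daids(new_daids)    # only between runs
cm_list2 = qreq_.execute()             # rerun against the enlarged database set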

assert_self(qreq_, ibs)[source]
daids
dnids

TODO – save dnids in qreq_ state

ensure_chips(qreq_, verbose=True, extra_tries=1)[source]

ensure chips are computed (used in expt, not used in pipeline)

Parameters:
  • verbose (bool) – verbosity flag(default = True)
  • extra_tries (int) – (default = 1)

CommandLine:

python -m ibeis.algo.hots.query_request --test-ensure_chips

Example

>>> # ENABLE_DOCTEST
>>> # Delete chips (accidentally) then try to run a query
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(defaultdb='testdb1')
>>> daids = ibs.get_valid_aids()[0:3]
>>> qaids = ibs.get_valid_aids()[0:6]
>>> qreq_ = ibs.new_query_request(qaids, daids)
>>> verbose = True
>>> extra_tries = 1
>>> qchip_fpaths = ibs.get_annot_chip_fpath(qaids, config2_=qreq_.extern_query_config2)
>>> dchip_fpaths = ibs.get_annot_chip_fpath(daids, config2_=qreq_.extern_data_config2)
>>> ut.remove_file_list(qchip_fpaths)
>>> ut.remove_file_list(dchip_fpaths)
>>> result = qreq_.ensure_chips(verbose, extra_tries)
>>> print(result)
ensure_features(qreq_, verbose=True)[source]

ensure features are computed
Parameters:verbose (bool) – verbosity flag (default = True)

CommandLine:

python -m ibeis.algo.hots.query_request --test-ensure_features

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(defaultdb='testdb1')
>>> daids = ibs.get_valid_aids()[0:3]
>>> qaids = ibs.get_valid_aids()[0:6]
>>> qreq_ = ibs.new_query_request(qaids, daids)
>>> ibs.delete_annot_feats(qaids,  config2_=qreq_.get_external_query_config2())  # Remove the chips
>>> ut.remove_file_list(ibs.get_annot_chip_fpath(qaids, config2_=qreq_.get_external_query_config2()))
>>> verbose = True
>>> result = qreq_.ensure_features(verbose)
>>> print(result)
ensure_featweights(qreq_, verbose=True)[source]

ensure feature weights are computed

execute(qreq_, qaids=None)[source]
execute_subset(qreq_, qaids=None)[source]
extern_data_config2
extern_query_config2
get_cfgstr(qreq_, with_input=False, with_data=True, with_pipe=True, hash_pipe=False)[source]

main cfgstring used to identify the ‘querytype’ FIXME: name params + data

TODO:
rename query_cfgstr to pipe_cfgstr or pipeline_cfgstr EVERYWHERE
Parameters:with_input (bool) – (default = False)
Returns:cfgstr
Return type:str

CommandLine:

python -m ibeis.algo.hots.query_request --exec-get_cfgstr

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(defaultdb='testdb1')
>>> species = ibeis.const.TEST_SPECIES.ZEB_PLAIN
>>> daids = ibs.get_valid_aids(species=species)
>>> qaids = ibs.get_valid_aids(species=species)
>>> qreq_ = ibs.new_query_request(qaids, daids)
>>> with_input = True
>>> cfgstr = qreq_.get_cfgstr(with_input)
>>> result = ('cfgstr = %s' % (str(cfgstr),))
>>> print(result)
get_chipmatch_fpaths(qreq_, qaid_list)[source]

Efficient function to get a list of chipmatch paths

get_data_hashid(qreq_)[source]
get_external_daids(qreq_)[source]

These are the user's daids in vsone mode

get_external_data_config2(qreq_)[source]
get_external_duuids(qreq_)[source]

These are the user's duuids in vsone mode

get_external_qaids(qreq_)[source]

These are the user's qaids in vsone mode

get_external_query_config2(qreq_)[source]
get_external_query_groundtruth(qreq_, qaids)[source]

gets groundtruth that are accessible via this query

get_external_quuids(qreq_)[source]

These are the user's quuids in vsone mode

get_full_cfgstr(qreq_)[source]

main cfgstring used to identify the ‘querytype’ FIXME: name params + data + query

get_infostr(qreq_)[source]
get_internal_daids(qreq_)[source]
get_internal_data_config2(qreq_)[source]
get_internal_data_hashid(qreq_)[source]
get_internal_duuids(qreq_)[source]
get_internal_qaids(qreq_)[source]
get_internal_query_config2(qreq_)[source]
get_internal_query_hashid(qreq_)[source]
get_internal_quuids(qreq_)[source]
get_pipe_cfgstr(qreq_)[source]

FIXME: name params only

get_pipe_hashid(qreq_)[source]
get_qresdir(qreq_)[source]
get_query_hashid(qreq_)[source]

CommandLine:

python -m ibeis.algo.hots.query_request --exec-QueryRequest.get_query_hashid --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> import ibeis
>>> qreq_ = ibeis.testdata_qreq_()
>>> query_hashid = qreq_.get_query_hashid()
>>> result = ('query_hashid = %s' % (ut.repr2(query_hashid),))
>>> print(result)
get_result_fnames(qreq_, qaid_list)[source]

Efficient function to get a list of chipmatch paths

get_shortinfo_cfgstr(qreq_)[source]
get_shortinfo_parts(qreq_)[source]
get_unique_species(qreq_)[source]
lazy_load(qreq_, verbose=True)[source]

Performs preloading of all data needed for a batch of queries

lazy_preload(qreq_, verbose=True)[source]

Feature weights and normalizers should be loaded before vsone queries are issued. They do not depend only on qparams

Load non-query specific normalizers / weights
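A minimal usage sketch of the preload order hinted at above, using only methods documented in this class:

import ibeis
qreq_ = ibeis.testdata_qreq_(defaultdb='testdb1')
qreq_.lazy_preload(verbose=True)   # non-query-specific weights / normalizers
qreq_.lazy_load(verbose=True)      # everything needed for this batch of queries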

load_distinctiveness_normalizer(qreq_, verbose=True)[source]

Example

>>> from ibeis.algo.hots import distinctiveness_normalizer
>>> verbose = True
load_indexer(qreq_, verbose=True, force=False)[source]
load_lnbnn_normalizer(qreq_, verbose=True)[source]
load_score_normalizer(qreq_, verbose=True)[source]
make_empty_chip_matches(qreq_)[source]

returns empty query results for each external qaid
Returns:cm_list
Return type:list

CommandLine:

python -m ibeis.algo.hots.query_request --exec-make_empty_chip_matches

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> import ibeis
>>> qreq_ = ibeis.main_helpers.testdata_qreq_()
>>> cm_list = qreq_.make_empty_chip_matches()
>>> cm = cm_list[0]
>>> cm.print_rawinfostr()
>>> result = ('cm_list = %s' % (str(cm_list),))
>>> print(result)
classmethod new_query_request(qaid_list, daid_list, qparams, qresdir, ibs, _indexer_request_params)[source]

old way of calling new

Parameters:
  • qaid_list (list) –
  • daid_list (list) –
  • qparams (QueryParams) – query hyper-parameters
  • qresdir (str) –
  • ibs (ibeis.IBEISController) – image analysis api
  • _indexer_request_params (dict) –
Returns:

ibeis.QueryRequest

qaids
qnids

TODO – save qnids in qreq_ state

remove_internal_daids(qreq_, remove_daids)[source]

State Modification: remove daids from the query request. Do not call this function often. It invalidates the indexer, which is very slow to rebuild. Should only be done between query pipeline runs.

CommandLine:

python -m ibeis.algo.hots.query_request --test-remove_internal_daids

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> import ibeis
>>> # build test data
>>> ibs = ibeis.opendb('testdb1')
>>> species = ibeis.const.TEST_SPECIES.ZEB_PLAIN
>>> daids = ibs.get_valid_aids(species=species, is_exemplar=True)
>>> qaids = ibs.get_valid_aids(species=species, is_exemplar=False)
>>> qreq_ = ibs.new_query_request(qaids, daids)
>>> remove_daids = daids[0:1]
>>> # execute function
>>> assert len(qreq_.internal_daids) == 4, 'bad setup data'
>>> qreq_.remove_internal_daids(remove_daids)
>>> # verify results
>>> assert len(qreq_.internal_daids) == 3, 'did not remove'
rrr(verbose=True)

special class reloading function

set_external_daids(qreq_, daid_list)[source]
set_external_qaid_mask(qreq_, masked_qaid_list)[source]
Parameters:masked_qaid_list (list) –

CommandLine:

python -m ibeis.algo.hots.query_request --test-set_external_qaid_mask

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(db='testdb1')
>>> qaid_list = [1, 2, 3, 4, 5]
>>> daid_list = [1, 2, 3, 4, 5]
>>> qreq_ = ibs.new_query_request(qaid_list, daid_list)
>>> masked_qaid_list = [2, 4, 5]
>>> qreq_.set_external_qaid_mask(masked_qaid_list)
>>> result = np.array_str(qreq_.get_external_qaids())
>>> print(result)
[1 3]
set_external_qaids(qreq_, qaid_list)[source]
set_internal_daids(qreq_, daid_list)[source]
set_internal_masked_daids(qreq_, masked_daid_list)[source]

used by the pipeline to execute a subset of the query request without modifying important state

set_internal_masked_qaids(qreq_, masked_qaid_list)[source]

used by the pipeline to execute a subset of the query request without modifying important state

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> import utool as ut
>>> import ibeis
>>> qaid_list = [1, 2, 3, 4]
>>> daid_list = [1, 2, 3, 4]
>>> qreq_ = ibeis.testdata_qreq_(qaid_override=qaid_list, daid_override=daid_list, p='default:codename=vsone,sv_on=True')
>>> qaids = qreq_.get_internal_qaids()
>>> ut.assert_lists_eq(qaid_list, qaids)
>>> masked_qaid_list = [1, 2, 3,]
>>> qreq_.set_internal_masked_qaids(masked_qaid_list)
>>> new_internal_aids = qreq_.get_internal_qaids()
>>> ut.assert_lists_eq(new_internal_aids, [4])
set_internal_qaids(qreq_, qaid_list)[source]
set_internal_unmasked_qaids(qreq_, unmasked_qaid_list)[source]

used by the pipeline to execute a subset of the query request without modifying important state

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> import utool as ut
>>> import ibeis
>>> qaid_list = [1, 2, 3, 4]
>>> daid_list = [1, 2, 3, 4]
>>> qreq_ = ibeis.testdata_qreq_(qaid_override=qaid_list, daid_override=daid_list, p='default:codename=vsone,sv_on=True')
>>> qaids = qreq_.get_internal_qaids()
>>> ut.assert_lists_eq(qaid_list, qaids)
>>> unmasked_qaid_list = [1, 2, 3,]
>>> qreq_.set_internal_unmasked_qaids(unmasked_qaid_list)
>>> new_internal_aids = qreq_.get_internal_qaids()
>>> ut.assert_lists_eq(new_internal_aids, unmasked_qaid_list)
shallowcopy(qreq_, qaids=None, qx=None, dx=None)[source]

Creates a copy of qreq with the same qparams object and a subset of the qx and dx objects. Used to generate chunks of vsone and vsmany queries

CommandLine:

python -m ibeis.algo.hots.query_request --exec-shallowcopy

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> import ibeis
>>> qreq_, ibs = testdata_qreq()
>>> qreq2_ = qreq_.shallowcopy(qx=0)
>>> assert qreq_.get_external_daids() is qreq2_.get_external_daids()
>>> assert len(qreq_.get_external_qaids()) != len(qreq2_.get_external_qaids())
>>> #assert qreq_.metadata is not qreq2_.metadata
ibeis.algo.hots.query_request.apply_species_with_detector_hack(ibs, cfgdict, qaids, daids, verbose=False)[source]

HACK: turns off featweights if they cannot be applied

ibeis.algo.hots.query_request.new_ibeis_query_request(ibs, qaid_list, daid_list, cfgdict=None, verbose=True, unique_species=None, use_memcache=True, query_cfg=None)[source]

ibeis entry point to create a new query request object

Returns:ibeis.QueryRequest

CommandLine:

python -m ibeis.algo.hots.query_request --test-new_ibeis_query_request:0
python -m ibeis.algo.hots.query_request --test-new_ibeis_query_request:2
Example0:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> ibs, qaid_list, daid_list = testdata_newqreq('PZ_MTEST')
>>> unique_species = None
>>> verbose = ut.NOT_QUIET
>>> cfgdict = {'sv_on': False, 'fg_on': True}  # 'fw_detector': 'rf'}
>>> # Execute test
>>> qreq_ = new_ibeis_query_request(ibs, qaid_list, daid_list, cfgdict=cfgdict)
>>> # Check Results
>>> print(qreq_.get_cfgstr())
>>> assert qreq_.qparams.sv_on is False, (
...     'qreq_.qparams.sv_on = %r ' % qreq_.qparams.sv_on)
>>> result = ibs.get_dbname() + qreq_.get_data_hashid()
>>> print(result)
PZ_MTEST_DSUUIDS((5)kmptegpfuwaibfvt)
Example1:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> ibs, qaid_list, daid_list = testdata_newqreq('NAUT_test')
>>> unique_species = None
>>> verbose = ut.NOT_QUIET
>>> cfgdict = {'sv_on': True, 'fg_on': True}
>>> # Execute test
>>> qreq_ = new_ibeis_query_request(ibs, qaid_list, daid_list, cfgdict=cfgdict)
>>> # Check Results.
>>> # Featweight should be off because there is no Naut detector
>>> print(qreq_.qparams.query_cfgstr)
>>> assert qreq_.qparams.sv_on is True, (
...     'qreq_.qparams.sv_on = %r ' % qreq_.qparams.sv_on)
>>> result = ibs.get_dbname() + qreq_.get_data_hashid()
>>> print(result)
NAUT_test_DSUUIDS((5)tklzzeuqjqxfbayo)
Example2:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> ibs, qaid_list, daid_list = testdata_newqreq('PZ_MTEST')
>>> unique_species = None
>>> verbose = ut.NOT_QUIET
>>> cfgdict = {'sv_on': False, 'augment_queryside_hack': True}
>>> # Execute test
>>> qreq_ = new_ibeis_query_request(ibs, qaid_list, daid_list, cfgdict=cfgdict)
>>> # Check Results.
>>> # Featweight should be off because there is no Naut detector
>>> print(qreq_.qparams.query_cfgstr)
>>> assert qreq_.qparams.sv_on is False, (
...     'qreq_.qparams.sv_on = %r ' % qreq_.qparams.sv_on)
>>> result = ibs.get_dbname() + qreq_.get_data_hashid()
>>> print(result)
PZ_MTEST_DSUUIDS((5)kmptegpfuwaibfvt)
ibeis.algo.hots.query_request.test_cfg_deepcopy()[source]

TESTING FUNCTION

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.query_request import *  # NOQA
>>> result = test_cfg_deepcopy()
>>> print(result)
ibeis.algo.hots.query_request.testdata_newqreq(defaultdb)[source]
Returns:(ibeis.IBEISController, list, list)
ibeis.algo.hots.query_request.testdata_qreq()[source]
Returns:(ibeis.QueryRequest, ibeis.IBEISController)

ibeis.algo.hots.scorenorm module

GOALS:
  1. vsmany
     • works reasonably for very few and very many
     • starts with small k and then k becomes a percent or log percent
     • distinctiveness from different locations
  2. 1-vs-1
     • uses distinctiveness and foreground when available
     • starts with ratio test and ransac
  3. First N decisions are interactive until we learn a good threshold
  4. Always show numbers between 0 and 1; spatial verification is based on the single best exemplar

x - build encoder
x - test encoder
x - monotonicity (both nondecreasing and strictly increasing)
x - cache encoder
x - cache maintenance (deleters and listers)
o - Incremental learning
o - Species sensitivity
  • Add ability for user to relearn encoder from labeled database.
TODO:
class ibeis.algo.hots.scorenorm.NormFeatScoreConfig(cfg, **kwargs)[source]

Bases: dtool.base.Config

exception ibeis.algo.hots.scorenorm.UnbalancedExampleException[source]

Bases: exceptions.Exception

ibeis.algo.hots.scorenorm.compare_featscores()[source]

CommandLine:

ibeis --tf compare_featscores --db PZ_MTEST --nfscfg :disttype=[L2_sift,lnbnn],top_percent=[None,.5,.1] -a timectrl -p default:K=[1,2],normalizer_rule=name --save featscore{db}.png --figsize=13,20 --diskshow

ibeis --tf compare_featscores --db PZ_MTEST --nfscfg :disttype=[L2_sift,normdist,lnbnn],top_percent=[None,.5] -a timectrl -p default:K=[1],normalizer_rule=name,sv_on=[True,False] --save featscore{db}.png --figsize=13,10 --diskshow

ibeis --tf compare_featscores --nfscfg :disttype=[L2_sift,normdist,lnbnn] -a timectrl -p default:K=1,normalizer_rule=name --db PZ_Master1 --save featscore{db}.png --figsize=13,13 --diskshow

ibeis --tf compare_featscores --nfscfg :disttype=[L2_sift,normdist,lnbnn] -a timectrl -p default:K=1,normalizer_rule=name --db GZ_ALL --save featscore{db}.png --figsize=13,13 --diskshow

ibeis --tf compare_featscores --db GIRM_Master1 --nfscfg ':disttype=fg,L2_sift,normdist,lnbnn' -a timectrl -p default:K=1,normalizer_rule=name --save featscore{db}.png --figsize=13,13

ibeis --tf compare_featscores --nfscfg :disttype=[L2_sift,normdist,lnbnn] -a timectrl -p default:K=[1,2,3],normalizer_rule=name,sv_on=False --db PZ_Master1 --save featscore{db}.png --dpi=128 --figsize=15,20 --diskshow

ibeis --tf compare_featscores --show --nfscfg :disttype=[L2_sift,normdist] -a timectrl -p :K=1 --db PZ_MTEST
ibeis --tf compare_featscores --show --nfscfg :disttype=[L2_sift,normdist] -a timectrl -p :K=1 --db GZ_ALL
ibeis --tf compare_featscores --show --nfscfg :disttype=[L2_sift,normdist] -a timectrl -p :K=1 --db PZ_Master1
ibeis --tf compare_featscores --show --nfscfg :disttype=[L2_sift,normdist] -a timectrl -p :K=1 --db GIRM_Master1

ibeis --tf compare_featscores --db PZ_MTEST --nfscfg :disttype=[L2_sift,normdist,lnbnn],top_percent=[None,.5,.2] -a timectrl -p default:K=[1],normalizer_rule=name --save featscore{db}.png --figsize=13,20 --diskshow

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.scorenorm import *  # NOQA
>>> result = compare_featscores()
>>> print(result)
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> ut.show_if_requested()
ibeis.algo.hots.scorenorm.get_topannot_training_idxs(cm, num=2)[source]

top annots version

Parameters:
  • cm (ibeis.ChipMatch) – object of feature correspondences and scores
  • num (int) – number of top annots per TP/TN (default = 2)

CommandLine:

python -m ibeis.algo.hots.scorenorm --exec-get_topannot_training_idxs --show

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.scorenorm import *  # NOQA
>>> import ibeis
>>> cm, qreq_ = ibeis.testdata_cm(defaultdb='PZ_MTEST')
>>> num = 2
>>> cm.score_csum(qreq_)
>>> (tp_idxs, tn_idxs) = get_topannot_training_idxs(cm, num)
>>> result = ('(tp_idxs, tn_idxs) = %s' % (ut.repr2((tp_idxs, tn_idxs), nl=1),))
>>> print(result)
(tp_idxs, tn_idxs) = (
    np.array([0, 1], dtype=np.int64),
    np.array([3, 4], dtype=np.int64),
)
ibeis.algo.hots.scorenorm.get_topname_training_idxs(cm, num=5)[source]

gets the indices of the annots in the top groundtrue name and the top groundfalse names.

Parameters:
  • cm (ibeis.ChipMatch) – object of feature correspondences and scores
  • num (int) – number of false names (default = 5)
Returns:

(tp_idxs, tn_idxs) – cm.daid_list[tp_idxs] are all of the annotations in the correct name; cm.daid_list[tn_idxs] are all of the annotations in the top num_false incorrect names.

Return type:

tuple

CommandLine:

python -m ibeis --tf get_topname_training_idxs --show

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.scorenorm import *  # NOQA
>>> import ibeis
>>> cm, qreq_ = ibeis.testdata_cm('PZ_MTEST', a='default:dindex=0:10,qindex=0:1', t='best')
>>> num = 1
>>> (tp_idxs, tn_idxs) = get_topname_training_idxs(cm, num)
>>> result = ('(tp_idxs, tn_idxs) = %s' % (ut.repr2((tp_idxs, tn_idxs), nl=1),))
>>> print(result)
(tp_idxs, tn_idxs) = (
    np.array([0, 1, 2], dtype=np.int64),
    [3, 4, 5, 6],
)
ibeis.algo.hots.scorenorm.get_training_annotscores(qreq_, cm_list)[source]

Returns the annotation scores between each query and the correct groundtruth annotations as well as the top scoring false annotations.

ibeis.algo.hots.scorenorm.get_training_desc_dist(cm, qreq_, fsv_col_lbls=[], namemode=True, top_percent=None, data_annots=None, query_annots=None, num=None)[source]

computes custom distances on prematched descriptors

SeeAlso:

python -m ibeis --tf learn_featscore_normalizer --show --disttype=ratio

python -m ibeis --tf learn_featscore_normalizer --show --disttype=normdist -a timectrl -t default:K=1 --db PZ_Master1 --save pzmaster_normdist.png
python -m ibeis --tf learn_featscore_normalizer --show --disttype=normdist -a timectrl -t default:K=1 --db PZ_MTEST --save pzmtest_normdist.png
python -m ibeis --tf learn_featscore_normalizer --show --disttype=normdist -a timectrl -t default:K=1 --db GZ_ALL

python -m ibeis --tf learn_featscore_normalizer --show --disttype=L2_sift -a timectrl -t default:K=1 --db PZ_MTEST
python -m ibeis --tf learn_featscore_normalizer --show --disttype=L2_sift -a timectrl -t default:K=1 --db PZ_Master1

python -m ibeis --tf compare_featscores --show --disttype=L2_sift,normdist -a timectrl -t default:K=1 --db GZ_ALL

CommandLine:

python -m ibeis.algo.hots.scorenorm --exec-get_training_desc_dist
python -m ibeis.algo.hots.scorenorm --exec-get_training_desc_dist:1

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.scorenorm import *  # NOQA
>>> import ibeis
>>> cm, qreq_ = ibeis.testdata_cm(defaultdb='PZ_MTEST')
>>> fsv_col_lbls = ['ratio', 'lnbnn', 'L2_sift']
>>> namemode = False
>>> (tp_fsv, tn_fsv) = get_training_desc_dist(cm, qreq_, fsv_col_lbls,
>>>                                           namemode=namemode)
>>> result = ut.repr2((tp_fsv.T, tn_fsv.T), nl=1)
>>> print(result)
Example1:
>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.scorenorm import *  # NOQA
>>> import ibeis
>>> cm, qreq_ = ibeis.testdata_cm(defaultdb='PZ_MTEST')
>>> fsv_col_lbls = cm.fsv_col_lbls
>>> num = None
>>> namemode = False
>>> top_percent = None
>>> data_annots = None
>>> (tp_fsv1, tn_fsv1) = get_training_fsv(cm, namemode=namemode,
>>>                                       top_percent=top_percent)
>>> (tp_fsv, tn_fsv) = get_training_desc_dist(cm, qreq_, fsv_col_lbls,
>>>                                           namemode=namemode,
>>>                                           top_percent=top_percent)
>>> vt.asserteq(tp_fsv1, tp_fsv)
>>> vt.asserteq(tn_fsv1, tn_fsv)
ibeis.algo.hots.scorenorm.get_training_featscores(qreq_, cm_list, disttype=None, namemode=True, fsvx=slice(None, None, None), threshx=None, thresh=0.9, num=None, top_percent=None)[source]

Returns the flattened set of feature scores between each query and the correct groundtruth annotations as well as the top scoring false annotations.

Parameters:
  • qreq (ibeis.QueryRequest) – query request object with hyper-parameters
  • cm_list (list) –
  • disttype (None) – (default = None)
  • namemode (bool) – (default = True)
  • fsvx (slice) – (default = slice(None, None, None))
  • threshx (None) – (default = None)
  • thresh (float) – only used if threshx is specified (default = 0.9)
SeeAlso:
TestResult.draw_feat_scoresep
Returns:(tp_scores, tn_scores, scorecfg)
Return type:tuple

CommandLine:

python -m ibeis.algo.hots.scorenorm --exec-get_training_featscores

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.scorenorm import *  # NOQA
>>> import ibeis
>>> cm_list, qreq_ = ibeis.testdata_cmlist(defaultdb='PZ_MTEST', a=['default:qsize=10'])
>>> disttype = None
>>> namemode = True
>>> fsvx = None
>>> threshx = 1
>>> thresh = 0.5
>>> (tp_scores, tn_scores, scorecfg) = get_training_featscores(
>>>     qreq_, cm_list, disttype, namemode, fsvx, threshx, thresh)
>>> result = scorecfg
>>> print(result)
(lnbnn*fg)[fg > 0.5]

lnbnn*fg[fg > 0.5]

ibeis.algo.hots.scorenorm.get_training_fsv(cm, namemode=True, num=None, top_percent=None)[source]

CommandLine:

python -m ibeis.algo.hots.scorenorm --exec-get_training_fsv --show

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.scorenorm import *  # NOQA
>>> import ibeis
>>> num = None
>>> cm, qreq_ = ibeis.testdata_cm('PZ_MTEST', a='default:dindex=0:10,qindex=0:1', t='best')
>>> (tp_fsv, tn_fsv) = get_training_fsv(cm, namemode=False)
>>> result = ('(tp_fsv, tn_fsv) = %s' % (ut.repr2((tp_fsv, tn_fsv), nl=1),))
>>> print(result)
ibeis.algo.hots.scorenorm.learn_annotscore_normalizer(qreq_, learnkw={})[source]

Takes the result of queries and trains a score encoder

Parameters:qreq (ibeis.QueryRequest) – query request object with hyper-parameters
Returns:encoder
Return type:vtool.ScoreNormalizer

CommandLine:

python -m ibeis --tf learn_annotscore_normalizer --show

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.scorenorm import *  # NOQA
>>> import ibeis
>>> qreq_ = ibeis.testdata_qreq_(
>>>     defaultdb='PZ_MTEST', a=['default'], p=['default'])
>>> encoder = learn_annotscore_normalizer(qreq_)
>>> ut.quit_if_noshow()
>>> encoder.visualize(figtitle=encoder.get_cfgstr())
>>> ut.show_if_requested()
ibeis.algo.hots.scorenorm.learn_featscore_normalizer(qreq_, datakw={}, learnkw={})[source]

Takes the result of queries and trains a score encoder

Parameters:qreq (ibeis.QueryRequest) – query request object with hyper-parameters
Returns:encoder
Return type:vtool.ScoreNormalizer

CommandLine:

python -m ibeis --tf learn_featscore_normalizer --show -t default:
python -m ibeis --tf learn_featscore_normalizer --show --fsvx=0 --threshx=1 --show
python -m ibeis --tf learn_featscore_normalizer --show -a default:size=40 -t default:fg_on=False,lnbnn_on=False,ratio_thresh=1.0,K=1,Knorm=6,sv_on=False,normalizer_rule=name --fsvx=0 --threshx=1 --show

python -m ibeis --tf learn_featscore_normalizer --show --disttype=ratio
python -m ibeis --tf learn_featscore_normalizer --show --disttype=lnbnn
python -m ibeis --tf learn_featscore_normalizer --show --disttype=L2_sift -t default:K=1

python -m ibeis --tf learn_featscore_normalizer --show --disttype=L2_sift -a timectrl -t default:K=1 --db PZ_Master1
python -m ibeis --tf learn_featscore_normalizer --show --disttype=ratio -a timectrl -t default:K=1 --db PZ_Master1
python -m ibeis --tf learn_featscore_normalizer --show --disttype=lnbnn -a timectrl -t default:K=1 --db PZ_Master1

# LOOK AT THIS
python -m ibeis --tf learn_featscore_normalizer --show --disttype=normdist -a timectrl -t default:K=1 --db PZ_Master1
#python -m ibeis --tf learn_featscore_normalizer --show --disttype=parzen -a timectrl -t default:K=1 --db PZ_Master1
#python -m ibeis --tf learn_featscore_normalizer --show --disttype=norm_parzen -a timectrl -t default:K=1 --db PZ_Master1

python -m ibeis --tf learn_featscore_normalizer --show --disttype=lnbnn --db PZ_Master1 -a timectrl -t best

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.scorenorm import *  # NOQA
>>> import ibeis
>>> learnkw = {}
>>> datakw = NormFeatScoreConfig.from_argv_dict()
>>> qreq_ = ibeis.testdata_qreq_(
>>>     defaultdb='PZ_MTEST', a=['default'], p=['default'])
>>> encoder = learn_featscore_normalizer(qreq_, datakw, learnkw)
>>> ut.quit_if_noshow()
>>> encoder.visualize(figtitle=encoder.get_cfgstr())
>>> ut.show_if_requested()
ibeis.algo.hots.scorenorm.load_featscore_normalizer(normer_cfgstr)[source]
Parameters:normer_cfgstr

CommandLine:

python -m ibeis.algo.hots.scorenorm --exec-load_featscore_normalizer --show
python -m ibeis.algo.hots.scorenorm --exec-load_featscore_normalizer --show --cfgstr=featscore
python -m ibeis.algo.hots.scorenorm --exec-load_featscore_normalizer --show --cfgstr=lovb

Example

>>> # SCRIPT
>>> from ibeis.algo.hots.scorenorm import *  # NOQA
>>> normer_cfgstr = ut.get_argval('--cfgstr', default='featscore')
>>> encoder = load_featscore_normalizer(normer_cfgstr)
>>> encoder.visualize(figtitle=encoder.get_cfgstr())
>>> ut.show_if_requested()
ibeis.algo.hots.scorenorm.train_featscore_normalizer()[source]

CommandLine:

python -m ibeis --tf train_featscore_normalizer --show

# Write Encoder
python -m ibeis --tf train_featscore_normalizer --db PZ_MTEST -t best -a default --fsvx=0 --threshx=1 --show

# Visualize encoder score adjustment
python -m ibeis --tf TestResult.draw_feat_scoresep --db PZ_MTEST -a timectrl -t best:lnbnn_normer=lnbnn_fg_featscore --show --nocache --nocache-hs

# Compare ranking with encoder vs without
python -m ibeis --tf draw_rank_cdf --db PZ_MTEST -a timectrl -t best:lnbnn_normer=[None,wulu] --show
python -m ibeis --tf draw_rank_cdf --db PZ_MTEST -a default  -t best:lnbnn_normer=[None,wulu] --show

# Compare in ipynb
python -m ibeis --tf autogen_ipynb --ipynb --db PZ_MTEST -a default -t best:lnbnn_normer=[None,lnbnn_fg_0.9__featscore]

# Big Test
python -m ibeis --tf draw_rank_cdf --db PZ_Master1 -a timectrl -t best:lnbnn_normer=[None,lovb],lnbnn_norm_thresh=.5 --show
python -m ibeis --tf draw_rank_cdf --db PZ_Master1 -a timectrl -t best:lnbnn_normer=[None,jypz],lnbnn_norm_thresh=.1 --show
python -m ibeis --tf draw_rank_cdf --db PZ_Master1 -a timectrl -t best:lnbnn_normer=[None,jypz],lnbnn_norm_thresh=0 --show


# Big Train
python -m ibeis --tf learn_featscore_normalizer --db PZ_Master1 -a timectrl -t best:K=1 --fsvx=0 --threshx=1 --show
python -m ibeis --tf train_featscore_normalizer --db PZ_Master1 -a timectrl:has_none=photobomb -t best:K=1 --fsvx=0 --threshx=1 --show --ainfo
python -m ibeis --tf train_featscore_normalizer --db PZ_Master1 -a timectrl:has_none=photobomb -t best:K=1 --fsvx=0 --threshx=1 --show
python -m ibeis --tf train_featscore_normalizer --db PZ_Master1 -a timectrl:has_none=photobomb -t best:K=3 --fsvx=0 --threshx=1 --show

Example

>>> # SCRIPT
>>> from ibeis.algo.hots.scorenorm import *  # NOQA
>>> encoder = train_featscore_normalizer()
>>> encoder.visualize(figtitle=encoder.get_cfgstr())
>>> ut.show_if_requested()

ibeis.algo.hots.scoring module

TODO: optional symmetric and asymmetric search

ibeis.algo.hots.scoring.compute_annot_coverage_score(qreq_, cm, config={})[source]

CommandLine:

python -m ibeis.algo.hots.scoring --test-compute_annot_coverage_score:0
Example0:
>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.scoring import *  # NOQA
>>> qreq_, cm = plh.testdata_scoring()
>>> config = qreq_.qparams
>>> daid_list, score_list = compute_annot_coverage_score(qreq_, cm, config)
>>> ut.assert_inbounds(np.array(score_list), 0, 1, eq=True)
>>> result = ut.list_str(score_list, precision=3)
>>> print(result)
ibeis.algo.hots.scoring.compute_csum_score(cm, qreq_=None)[source]

CommandLine:

python -m ibeis.algo.hots.scoring --test-compute_csum_score

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.scoring import *  # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_pre_sver('testdb1', qaid_list=[1])
>>> cm = cm_list[0]
>>> cm.evaluate_dnids(qreq_.ibs)
>>> cm.qnid = 1   # Hack for testdb1 names
>>> gt_flags = cm.get_groundtruth_flags()
>>> annot_score_list = compute_csum_score(cm)
>>> assert annot_score_list[gt_flags].max() > annot_score_list[~gt_flags].max()
>>> assert annot_score_list[gt_flags].max() > 10.0
ibeis.algo.hots.scoring.compute_general_matching_coverage_mask(make_mask_func, chipsize, fm, fs, qkpts, qweights, cov_cfg, out=None)[source]
ibeis.algo.hots.scoring.compute_name_coverage_score(qreq_, cm, config={})[source]

CommandLine:

python -m ibeis.algo.hots.scoring --test-compute_name_coverage_score:0
Example0:
>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> from ibeis.algo.hots.scoring import *  # NOQA
>>> qreq_, cm = plh.testdata_scoring()
>>> cm.evaluate_dnids(qreq_.ibs)
>>> config = qreq_.qparams
>>> dnid_list, score_list = compute_name_coverage_score(qreq_, cm, config)
>>> ut.assert_inbounds(np.array(score_list), 0, 1, eq=True)
>>> result = ut.list_str(score_list, precision=3)
>>> print(result)
ibeis.algo.hots.scoring.evaluate_masks_iter(masks_iter)[source]

saves the evaluation of a masks iterator

ibeis.algo.hots.scoring.general_annot_coverage_mask_generator(make_mask_func, qreq_, cm, config, cov_cfg)[source]
Yields:
daid, weight_mask_m, weight_mask

CommandLine:

python -m ibeis.algo.hots.scoring --test-general_annot_coverage_mask_generator --show
python -m ibeis.algo.hots.scoring --test-general_annot_coverage_mask_generator --show --qaid 18

Note

Evaluate output one at a time or it will get clobbered
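A hedged usage sketch of this note (variable names follow Example0 below; the copy() calls reflect the assumption that the generator reuses its mask buffers between yields):

>>> # keep copies of each yielded mask so later iterations do not clobber them
>>> kept = [(daid, weight_mask_m.copy(), weight_mask.copy())
...         for daid, weight_mask_m, weight_mask in masks_iter]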

Example0:
>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> from ibeis.algo.hots.scoring import *  # NOQA
>>> qreq_, cm = plh.testdata_scoring('PZ_MTEST', qaid_list=[18])
>>> config = qreq_.qparams
>>> make_mask_func, cov_cfg = get_mask_func(config)
>>> masks_iter = general_annot_coverage_mask_generator(make_mask_func, qreq_, cm, config, cov_cfg)
>>> daid_list, score_list, masks_list = evaluate_masks_iter(masks_iter)
>>> #assert daid_list[idx] ==
>>> ut.quit_if_noshow()
>>> idx = score_list.argmax()
>>> daids = [daid_list[idx]]
>>> daid, weight_mask_m, weight_mask = masks_list[idx]
>>> show_single_coverage_mask(qreq_, cm, weight_mask_m, weight_mask, daids)
>>> ut.show_if_requested()
ibeis.algo.hots.scoring.general_coverage_mask_generator(make_mask_func, qreq_, qaid, id_list, fm_list, fs_list, config, cov_cfg)[source]

agnostic to whether or not the id/fm/fs lists are name or annotation groups

ibeis.algo.hots.scoring.general_name_coverage_mask_generator(make_mask_func, qreq_, cm, config, cov_cfg)[source]
Yields:
nid, weight_mask_m, weight_mask

CommandLine:

python -m ibeis.algo.hots.scoring --test-general_name_coverage_mask_generator --show
python -m ibeis.algo.hots.scoring --test-general_name_coverage_mask_generator --show --qaid 18

Note

Evaluate output one at a time or it will get clobbered

Example0:
>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> from ibeis.algo.hots.scoring import *  # NOQA
>>> qreq_, cm = plh.testdata_scoring('PZ_MTEST', qaid_list=[18])
>>> config = qreq_.qparams
>>> make_mask_func, cov_cfg = get_mask_func(config)
>>> masks_iter = general_name_coverage_mask_generator(make_mask_func, qreq_, cm, config, cov_cfg)
>>> dnid_list, score_list, masks_list = evaluate_masks_iter(masks_iter)
>>> ut.quit_if_noshow()
>>> nidx = np.where(dnid_list == cm.qnid)[0][0]
>>> daids = cm.get_groundtruth_daids()
>>> dnid, weight_mask_m, weight_mask = masks_list[nidx]
>>> show_single_coverage_mask(qreq_, cm, weight_mask_m, weight_mask, daids)
>>> ut.show_if_requested()
ibeis.algo.hots.scoring.get_annot_kpts_baseline_weights(ibs, aid_list, config2_=None, config={})[source]

Returns weights based on distinctiveness and/or feature scores, or all ones. Customized based on config.

Parameters:
  • qreq (QueryRequest) – query request object with hyper-parameters
  • aid_list (int) – list of annotation ids
  • config (dict) –
Returns:

weights_list

Return type:

list

CommandLine:

python -m ibeis.algo.hots.scoring --test-get_annot_kpts_baseline_weights

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.scoring import *  # NOQA
>>> qreq_, cm = plh.testdata_scoring('testdb1')
>>> aid_list = cm.daid_list
>>> config = qreq_.qparams
>>> # execute function
>>> config2_ = qreq_.qparams
>>> kpts_list = qreq_.ibs.get_annot_kpts(aid_list, config2_=config2_)
>>> weights_list = get_annot_kpts_baseline_weights(qreq_.ibs, aid_list, config2_, config)
>>> # verify results
>>> depth1 = ut.get_list_column(ut.depth_profile(kpts_list), 0)
>>> depth2 = ut.depth_profile(weights_list)
>>> assert depth1 == depth2
>>> print(depth1)
>>> result = str(depth2)
>>> print(result)
ibeis.algo.hots.scoring.get_kpts_distinctiveness(ibs, aid_list, config2_=None, config={})[source]

per-species distinctiveness wrapper around the ibeis cached function

ibeis.algo.hots.scoring.get_mask_func(config)[source]
ibeis.algo.hots.scoring.get_masks(qreq_, cm, config={})[source]

testing function

CommandLine:

# SHOW THE BASELINE AND MATCHING MASKS
python -m ibeis.algo.hots.scoring --test-get_masks
python -m ibeis.algo.hots.scoring --test-get_masks \
    --maskscore_mode=kpts --show --prior_coeff=.5 --unconstrained_coeff=.3 --constrained_coeff=.2
python -m ibeis.algo.hots.scoring --test-get_masks \
    --maskscore_mode=grid --show --prior_coeff=.5 --unconstrained_coeff=0 --constrained_coeff=.5
python -m ibeis.algo.hots.scoring --test-get_masks --qaid 4\
    --maskscore_mode=grid --show --prior_coeff=.5 --unconstrained_coeff=0 --constrained_coeff=.5
python -m ibeis.algo.hots.scoring --test-get_masks --qaid 86\
    --maskscore_mode=grid --show --prior_coeff=.5 --unconstrained_coeff=0 --constrained_coeff=.5 --grid_scale_factor=.5

python -m ibeis.algo.hots.scoring --test-get_masks --show --db PZ_MTEST --qaid 18
python -m ibeis.algo.hots.scoring --test-get_masks --show --db PZ_MTEST --qaid 1

Example

>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> from ibeis.algo.hots.scoring import *  # NOQA
>>> import ibeis
>>> # build test data
>>> qreq_, cm = plh.testdata_scoring('PZ_MTEST', qaid_list=[18])
>>> config = qreq_.qparams
>>> # execute function
>>> id_list, score_list, masks_list = get_masks(qreq_, cm, config)
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> show_coverage_mask(qreq_, cm, masks_list, index=score_list.argmax())
>>> pt.show_if_requested()
ibeis.algo.hots.scoring.get_name_shortlist_aids(daid_list, dnid_list, annot_score_list, name_score_list, nid2_nidx, nNameShortList, nAnnotPerName)[source]

CommandLine:

python -m ibeis.algo.hots.scoring --test-get_name_shortlist_aids

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.scoring import *  # NOQA
>>> # build test data
>>> daid_list        = np.array([11, 12, 13, 14, 15, 16, 17])
>>> dnid_list        = np.array([21, 21, 21, 22, 22, 23, 24])
>>> annot_score_list = np.array([ 6,  2,  3,  5,  6,  3,  2])
>>> name_score_list  = np.array([ 8,  9,  5,  4])
>>> nid2_nidx        = {21:0, 22:1, 23:2, 24:3}
>>> nNameShortList, nAnnotPerName = 3, 2
>>> # execute function
>>> args = (daid_list, dnid_list, annot_score_list, name_score_list,
...         nid2_nidx, nNameShortList, nAnnotPerName)
>>> top_daids = get_name_shortlist_aids(*args)
>>> # verify results
>>> result = str(top_daids)
>>> print(result)
[15, 14, 11, 13, 16]
ibeis.algo.hots.scoring.make_chipmatch_shortlists(qreq_, cm_list, nNameShortList, nAnnotPerName, score_method=u'nsum')[source]

Makes shortlists for reranking

CommandLine:

python -m ibeis.algo.hots.scoring --test-make_chipmatch_shortlists --show

Example

>>> # ENABLE_DOCTEST
>>> from ibeis.algo.hots.scoring import *  # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_pre_sver('PZ_MTEST', qaid_list=[18])
>>> score_method    = 'nsum'
>>> nNameShortList  = 5
>>> nAnnotPerName   = 6
>>> # apply scores
>>> score_chipmatch_list(qreq_, cm_list, score_method)
>>> cm_input = cm_list[0]
>>> #assert cm_input.dnid_list.take(cm_input.argsort())[0] == cm_input.qnid
>>> # execute function
>>> cm_shortlist = make_chipmatch_shortlists(qreq_, cm_list, nNameShortList, nAnnotPerName)
>>> cm_input.print_rawinfostr()
>>> cm = cm_shortlist[0]
>>> cm.print_rawinfostr()
>>> # should be sorted already from the shortlist take
>>> top_nid_list = cm.dnid_list
>>> top_aid_list = cm.daid_list
>>> qnid = cm.qnid
>>> print('top_aid_list = %r' % (top_aid_list,))
>>> print('top_nid_list = %r' % (top_nid_list,))
>>> print('qnid = %r' % (qnid,))
>>> rankx = top_nid_list.tolist().index(qnid)
>>> assert rankx == 0, 'qnid=%r should be first rank, not rankx=%r' % (qnid, rankx)
>>> max_num_rerank = nNameShortList * nAnnotPerName
>>> min_num_rerank = nNameShortList
>>> ut.assert_inbounds(len(top_nid_list), min_num_rerank, max_num_rerank, 'incorrect number in shortlist', eq=True)
>>> ut.quit_if_noshow()
>>> cm.show_single_annotmatch(qreq_, daid=top_aid_list[0])
>>> ut.show_if_requested()
ibeis.algo.hots.scoring.score_chipmatch_list(qreq_, cm_list, score_method, progkw=None)[source]

CommandLine:

python -m ibeis.algo.hots.scoring --test-score_chipmatch_list
python -m ibeis.algo.hots.scoring --test-score_chipmatch_list:1
python -m ibeis.algo.hots.scoring --test-score_chipmatch_list:0 --show
Example0:
>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> from ibeis.algo.hots.scoring import *  # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_pre_sver()
>>> score_method = qreq_.qparams.prescore_method
>>> score_chipmatch_list(qreq_, cm_list, score_method)
>>> cm = cm_list[0]
>>> assert cm.score_list.argmax() == 0
>>> ut.quit_if_noshow()
>>> cm.show_single_annotmatch(qreq_)
>>> ut.show_if_requested()
Example1:
>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> from ibeis.algo.hots.scoring import *  # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_post_sver()
>>> qaid = qreq_.get_external_qaids()[0]
>>> cm = cm_list[0]
>>> score_method = qreq_.qparams.score_method
>>> score_chipmatch_list(qreq_, cm_list, score_method)
>>> assert cm.score_list.argmax() == 0
>>> ut.quit_if_noshow()
>>> cm.show_single_annotmatch(qreq_)
>>> ut.show_if_requested()
ibeis.algo.hots.scoring.score_masks(masks_iter)[source]
ibeis.algo.hots.scoring.score_matching_mask(weight_mask_m, weight_mask)[source]
ibeis.algo.hots.scoring.show_annot_weights(qreq_, aid, config={})[source]

DEMO FUNC

CommandLine:

python -m ibeis.algo.hots.scoring --test-show_annot_weights --show --db GZ_ALL --aid 1 --maskscore_mode='grid'
python -m ibeis.algo.hots.scoring --test-show_annot_weights --show --db GZ_ALL --aid 1 --maskscore_mode='kpts'
python -m ibeis.algo.hots.scoring --test-show_annot_weights --show --db PZ_Master0 --aid 1
python -m ibeis.algo.hots.scoring --test-show_annot_weights --show --db PZ_MTEST --aid 1

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.scoring import *  # NOQA
>>> import plottool as pt
>>> import ibeis
>>> qreq_ = ibeis.testdata_qreq_()
>>> ibs = qreq_.ibs
>>> aid = qreq_.get_external_qaids()[0]
>>> config = qreq_.qparams
>>> show_annot_weights(qreq_, aid, config)
>>> pt.show_if_requested()
ibeis.algo.hots.scoring.show_coverage_mask(qreq_, cm, masks_list, index=0, fnum=None)[source]
ibeis.algo.hots.scoring.show_single_coverage_mask(qreq_, cm, weight_mask_m, weight_mask, daids, fnum=None)[source]
ibeis.algo.hots.scoring.sift_selectivity_score(vecs1_m, vecs2_m, cos_power=3.0, dtype=<type 'float'>)[source]

applies the selectivity score from the SMK paper: take the componentwise dot product and divide by 512**2 because of the SIFT descriptor uint8 trick
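A minimal NumPy sketch of that computation (a hedged illustration, not the module's code; the helper name and the final clip are assumptions):

>>> import numpy as np
>>> def sift_selectivity_score_sketch(vecs1_m, vecs2_m, cos_power=3.0):
...     # componentwise dot product between matched uint8 SIFT descriptors
...     cossim = (vecs1_m.astype(np.float64) * vecs2_m.astype(np.float64)).sum(axis=1)
...     # undo the uint8 scaling trick: stored SIFT descriptors have norm ~512
...     cossim /= 512.0 ** 2
...     # selectivity weighting: sharpen the clipped cosine similarity
...     return np.clip(cossim, 0.0, 1.0) ** cos_power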

ibeis.algo.hots.special_query module

TODO: DEPRECATE

handles the “special” more complex vs-one re-ranked query

# Write some alias for ourselves
python -c "import utool as ut; ut.write_modscript_alias('Tinc.sh', 'ibeis.algo.hots.qt_inc_automatch')"
python -c "import utool as ut; ut.write_modscript_alias('pTinc.sh', 'ibeis.algo.hots.qt_inc_automatch', 'utprof.py')"

# PROFILE PZ_Master0 With lots of preadded data
sh pTinc.sh --test-test_inc_query:3 --num-init 7500 --test-title "ProfileIncPZMaster0"

sh pTinc.sh --test-test_inc_query:3 --num-init 8690
sh pTinc.sh --test-test_inc_query:0
sh pTinc.sh --test-test_inc_query:3 --num-init 5000 --devcache --vsone-errs

sh Tinc.sh --test-test_inc_query:2 --num-init 100 --devcache --vsone-errs

# Interactive GZ Test
sh Tinc.sh --test-test_inc_query:2 --num-init 100 --devcache --no-normcache --vsone-errs --ia 10 --test-title "GZ_Inc_Errors"

# Automatic GZ Test
sh Tinc.sh --test-test_inc_query:2 --num-init 100 --devcache --no-normcache --vsone-errs --test-title "GZ_Inc_Errors"

# AUTOMATIC PZ_MTEST
sh Tinc.sh --test-test_inc_query:1 --num-init 0 --devcache --no-normcache --vsone-errs --test-title "PZ_Inc_Errors"
# No testing
sh Tinc.sh --test-test_inc_query:1 --num-init 0 --no-normcache --test-title "PZ_Inc_Errors"

# Automatic GZ Test Small
sh Tinc.sh --test-test_inc_query:2 --num-init 0 --devcache --no-normcache --vsone-errs --test-title "GZ_DEV" --gzdev --ninit 34 --naac --interupt-case
sh Tinc.sh --test-test_inc_query:2 --num-init 0 --devcache --no-normcache --vsone-errs --test-title "GZ_DEV" --gzdev --ninit 47 --naac --interupt-case

class ibeis.algo.hots.special_query.TestTup(qaid_t, qaid, vsmany_rank, vsone_rank)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__getstate__()

Exclude the OrderedDict from pickling

__repr__()

Return a nicely formatted representation string

qaid

Alias for field number 1

qaid_t

Alias for field number 0

vsmany_rank

Alias for field number 2

vsone_rank

Alias for field number 3

ibeis.algo.hots.special_query.apply_new_qres_filter_scores(qreq_vsone_, qres_vsone, newfsv_list, newscore_aids, filtkey)[source]

applies the new filter scores vectors to a query result and updates other scores

Parameters:
  • qres_vsone (QueryResult) – object of feature correspondences and scores
  • newfsv_list (list) –
  • newscore_aids
  • filtkey

CommandLine:

python -m ibeis.algo.hots.special_query --test-apply_new_qres_filter_scores

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.special_query import *  # NOQA
>>> ibs, valid_aids = testdata_special_query()
>>> qaids = valid_aids[0:1]
>>> daids = valid_aids[1:]
>>> qaid = qaids[0]
>>> filtkey = hstypes.FiltKeys.DISTINCTIVENESS
>>> use_cache = False
>>> qaid2_qres_vsmany, qreq_vsmany_ = query_vsmany_initial(ibs, qaids, daids, use_cache)
>>> vsone_query_pairs = build_vsone_shortlist(ibs, qaid2_qres_vsmany)
>>> qaid2_qres_vsone, qreq_vsone_ = query_vsone_pairs(ibs, vsone_query_pairs, use_cache)
>>> qreq_vsone_.load_score_normalizer()
>>> qres_vsone = qaid2_qres_vsone[qaid]
>>> qres_vsmany = qaid2_qres_vsmany[qaid]
>>> top_aids = vsone_query_pairs[0][1]
>>> newfsv_list, newscore_aids = get_new_qres_distinctiveness(qres_vsone, qres_vsmany, top_aids, filtkey)
>>> apply_new_qres_filter_scores(qreq_vsone_, qres_vsone, newfsv_list, newscore_aids, filtkey)
ibeis.algo.hots.special_query.augment_vsone_with_vsmany(vsone_query_pairs, qaid2_qres_vsone, qaid2_qres_vsmany, qreq_vsone_)[source]

AUGMENT VSONE QUERIES (BIG HACKS AFTER THIS POINT) Apply vsmany distinctiveness scores to vsone

Parameters:
  • vsone_query_pairs
  • qaid2_qres_vsone (dict) – dict of query result objects
  • qaid2_qres_vsmany (dict) – dict of query result objects
  • qreq_vsone

CommandLine:

python -m ibeis.algo.hots.special_query --test-augment_vsone_with_vsmany

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.special_query import *  # NOQA
>>> # build test data
>>> ibs, valid_aids = testdata_special_query()
>>> qaids = valid_aids[0:1]
>>> daids = valid_aids[1:]
>>> qaid = qaids[0]
>>> qaid2_qres_vsmany, qreq_vsmany_ = query_vsmany_initial(
...    ibs, qaids, daids, use_cache=False, save_qcache=False,
...    qreq_vsmany_=None)
>>> vsone_query_pairs = build_vsone_shortlist(ibs, qaid2_qres_vsmany)
>>> qaid2_qres_vsone, qreq_vsone_ = query_vsone_pairs(ibs, vsone_query_pairs, False)
>>> if qreq_vsone_.qparams.score_normalization:
>>>    qreq_vsone_.load_score_normalizer()
>>> # execute function
>>> result = augment_vsone_with_vsmany(vsone_query_pairs, qaid2_qres_vsone, qaid2_qres_vsmany, qreq_vsone_)
>>> # verify results
>>> cm = qaid2_qres_vsone[qaid]
>>> assert np.all(ut.inbounds(cm.aid2_fsv[daids[0]], 0.0, 1.0, eq=True))
>>> assert np.all(ut.inbounds(cm.aid2_score[daids[0]], 0.0, 1.0, eq=True))
>>> print(result)
ibeis.algo.hots.special_query.build_vsone_shortlist(ibs, qaid2_qres_vsmany)[source]

looks at the top N names in a vsmany query to apply vsone reranking
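A hedged sketch of the shortlist idea in plain data structures (the helper name, arguments, and counts are illustrative; the real function operates on query result objects):

>>> def build_shortlist_sketch(qaid, name_scores, name_to_aids, num_names=3, aids_per_name=2):
...     # keep the best-scoring names, then up to a few annots per kept name
...     top_names = sorted(name_scores, key=name_scores.get, reverse=True)[:num_names]
...     top_aids = [aid for name in top_names for aid in name_to_aids[name][:aids_per_name]]
...     return (qaid, top_aids)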

Parameters:
  • ibs (IBEISController) – ibeis controller object
  • qaid2_qres_vsmany (dict) – dict of query result objects
Returns:

vsone_query_pairs

Return type:

list

CommandLine:

python -m ibeis.algo.hots.special_query --test-build_vsone_shortlist

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.special_query import *  # NOQA
>>> ibs, valid_aids = testdata_special_query()
>>> qaids = valid_aids[0:1]
>>> daids = valid_aids[1:]
>>> qaid2_qres_vsmany, qreq_vsmany_ = query_vsmany_initial(ibs, qaids, daids)
>>> # execute function
>>> vsone_query_pairs = build_vsone_shortlist(ibs, qaid2_qres_vsmany)
>>> qaid, top_aid_list = vsone_query_pairs[0]
>>> top_nid_list = ibs.get_annot_name_rowids(top_aid_list)
>>> assert top_nid_list.index(1) == 0, 'name 1 should be rank 1'
>>> assert len(top_nid_list) == 5, 'should have 3 names and up to 2 image per name'

[(1, [3, 2, 6, 5, 4])] [(1, [2, 3, 6, 5, 4])]

ibeis.algo.hots.special_query.get_extern_distinctiveness(qreq_, cm, **kwargs)[source]

Uses the distinctiveness normalizer class (which uses pre-downloaded models) to normalize the distinctiveness of keypoints for query points.

IDEA:
Because we have database points as well, we can use the distance between the normalizer of the query point and the normalizer of the database point. They should have similar normalizers if they are a correct match AND nondistinctive.
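A hedged sketch of that idea (the helper name and the threshold are assumptions, not part of this module):

>>> import numpy as np
>>> def normalizers_agree_sketch(query_norm_vec, data_norm_vec, dist_thresh=0.3):
...     # a small distance between the two normalizers suggests the pair is a
...     # plausible but nondistinctive (ambiguous) correspondence
...     dist = np.linalg.norm(np.asarray(query_norm_vec, dtype=np.float64) -
...                           np.asarray(data_norm_vec, dtype=np.float64))
...     return dist < dist_thresh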
Parameters:
  • qreq (QueryRequest) – query request object with hyper-parameters
  • cm (QueryResult) – object of feature correspondences and scores
Returns:

(new_fsv_list, daid_list)

Return type:

tuple

CommandLine:

python -m ibeis.algo.hots.special_query --test-get_extern_distinctiveness

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.special_query import *  # NOQA
>>> import ibeis
>>> # build test data
>>> ibs = ibeis.opendb('testdb1')
>>> daids = ibs.get_valid_aids(species=ibeis.const.TEST_SPECIES.ZEB_PLAIN)
>>> qaids = daids[0:1]
>>> cfgdict = dict(codename='vsone_unnorm_dist_ratio_extern_distinctiveness')
>>> qreq_ = ibs.new_query_request(qaids, daids, cfgdict=cfgdict)
>>> #qreq_.lazy_load()
>>> cm = ibs.query_chips(qreq_=qreq_, use_cache=False, save_qcache=False)[0]
>>> # execute function
>>> (new_fsv_list, daid_list) = get_extern_distinctiveness(qreq_, cm)
>>> # verify results
>>> assert all([fsv.shape[1] == 1 + len(cm.filtkey_list) for fsv in new_fsv_list])
>>> assert all([np.all(fsv.T[-1] >= 0) for fsv in new_fsv_list])
>>> assert all([np.all(fsv.T[-1] <= 1) for fsv in new_fsv_list])
ibeis.algo.hots.special_query.get_new_qres_distinctiveness(qres_vsone, qres_vsmany, top_aids, filtkey)[source]

gets the distinctiveness score from vsmany and applies it to vsone

CommandLine:

python -m ibeis.algo.hots.special_query --exec-get_new_qres_distinctiveness

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.special_query import *  # NOQA
>>> ibs, valid_aids = testdata_special_query()
>>> qaids = valid_aids[0:1]
>>> daids = valid_aids[1:]
>>> qaid = qaids[0]
>>> filtkey = hstypes.FiltKeys.DISTINCTIVENESS
>>> use_cache = False
>>> # execute function
>>> qaid2_qres_vsmany, qreq_vsmany_ = query_vsmany_initial(ibs, qaids, daids, use_cache)
>>> vsone_query_pairs = build_vsone_shortlist(ibs, qaid2_qres_vsmany)
>>> qaid2_qres_vsone, qreq_vsone_ = query_vsone_pairs(ibs, vsone_query_pairs, use_cache)
>>> qreq_vsone_.load_score_normalizer()
>>> qres_vsone = qaid2_qres_vsone[qaid]
>>> qres_vsmany = qaid2_qres_vsmany[qaid]
>>> top_aids = vsone_query_pairs[0][1]
>>> # verify results
>>> newfsv_list, newscore_aids = get_new_qres_distinctiveness(qres_vsone, qres_vsmany, top_aids, filtkey)
ibeis.algo.hots.special_query.new_feature_score_dimension(cm, daid)[source]

returns new fsv vectors but does not apply them

ibeis.algo.hots.special_query.product_scoring(new_fsv_vsone)[source]

product of all weights
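A toy sketch, assuming new_fsv_vsone is a (num_matches x num_filters) array of per-filter weights:

>>> import numpy as np
>>> new_fsv_vsone = np.array([[0.9, 0.5], [0.8, 1.0]])  # toy (num_matches x num_filters)
>>> combined_fs = np.prod(new_fsv_vsone, axis=1)        # one combined weight per match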

ibeis.algo.hots.special_query.query_vsmany_initial(ibs, qaids, daids, use_cache=False, qreq_vsmany_=None, save_qcache=False)[source]
Parameters:
  • ibs (IBEISController) – ibeis controller object
  • qaids (list) – query annotation ids
  • daids (list) – database annotation ids
  • use_cache (bool) – turns on disk based caching
  • qreq_vsmany (QueryRequest) – persistent vsmany query request
Returns:

(newfsv_list, newscore_aids)

Return type:

tuple

CommandLine:

python -m ibeis.algo.hots.special_query --test-query_vsmany_initial

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.special_query import *  # NOQA
>>> ibs, valid_aids = testdata_special_query()
>>> qaids = valid_aids[0:1]
>>> daids = valid_aids[1:]
>>> use_cache = False
>>> # execute function
>>> qaid2_qres_vsmany, qreq_vsmany_ = query_vsmany_initial(ibs, qaids, daids, use_cache)
>>> qres_vsmany = qaid2_qres_vsmany[qaids[0]]
>>> # verify results
>>> result = qres_vsmany.get_top_aids().tolist()
>>> print(result)
[2, 6, 4]
ibeis.algo.hots.special_query.query_vsone_pairs(ibs, vsone_query_pairs, use_cache=False, save_qcache=False)[source]

does vsone queries to rerank the top few vsmany queries

Returns:qaid2_qres_vsone, qreq_vsone_
Return type:tuple

CommandLine:

python -m ibeis.algo.hots.special_query --test-query_vsone_pairs

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.special_query import *  # NOQA
>>> ibs, valid_aids = testdata_special_query()
>>> qaids = valid_aids[0:1]
>>> daids = valid_aids[1:]
>>> qaid = qaids[0]
>>> filtkey = hstypes.FiltKeys.DISTINCTIVENESS
>>> use_cache = False
>>> save_qcache = False
>>> # execute function
>>> qaid2_qres_vsmany, qreq_vsmany_ = query_vsmany_initial(ibs, qaids, daids)
>>> vsone_query_pairs = build_vsone_shortlist(ibs, qaid2_qres_vsmany)
>>> qaid2_qres_vsone, qreq_vsone_ = query_vsone_pairs(ibs, vsone_query_pairs)
>>> qres_vsone = qaid2_qres_vsone[qaid]
>>> top_namescore_aids = qres_vsone.get_top_aids().tolist()
>>> result = str(top_namescore_aids)
>>> top_namescore_names = ibs.get_annot_names(top_namescore_aids)
>>> assert top_namescore_names[0] == 'easy', 'top_namescore_names[0]=%r' % (top_namescore_names[0],)
ibeis.algo.hots.special_query.query_vsone_verified(ibs, qaids, daids, qreq_vsmany__=None, incinfo=None)[source]

main special query entry point

A hacked-in vsone-reranked pipeline. Actually just two calls to the pipeline.

Parameters:
  • ibs (IBEISController) – ibeis controller object
  • qaids (list) – query annotation ids
  • daids (list) – database annotation ids
  • qreq_vsmany (QueryRequest) – used for persistent QueryRequest objects; if None, a new query request is created
Returns:

qaid2_qres, qreq_

Return type:

tuple

CommandLine:

python -m ibeis.algo.hots.special_query --test-query_vsone_verified

Example

>>> # SLOW_DOCTEST
>>> from ibeis.algo.hots.special_query import *  # NOQA
>>> ibs, valid_aids = testdata_special_query('PZ_MTEST')
>>> qaids = valid_aids[0:1]
>>> daids = valid_aids[1:]
>>> qaid = qaids[0]
>>> # execute function
>>> qaid2_qres, qreq_, qreq_vsmany_ = query_vsone_verified(ibs, qaids, daids)
>>> cm = qaid2_qres[qaid]
ibeis.algo.hots.special_query.test_vsone_errors(ibs, daids, qaid2_qres_vsmany, qaid2_qres_vsone, incinfo)[source]

ibs1 = ibs_gt
ibs2 = ibs (the current test database, sorry for the backwardness)
aid1_to_aid2 - maps annots from ibs1 to ibs2

ibeis.algo.hots.special_query.test_vsone_verified(ibs)[source]

hack in vsone-reranking

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.all_imports import *  # NOQA
>>> #reload_all()
>>> from ibeis.algo.hots.automated_matcher import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb('PZ_MTEST')
>>> test_vsone_verified(ibs)
ibeis.algo.hots.special_query.testdata_special_query(dbname=None)[source]

test data for special query doctests

ibeis.algo.hots.special_query.verbose_report_results(ibs, qaids, qaid2_qres_vsone, qaid2_qres_vsmany)[source]

ibeis.algo.hots.testem module

TODO: move to ibeis/scripts

ibeis.algo.hots.testem.draw_em_graph(P, Pn, PL, gam, num_labels)[source]

python -m ibeis.algo.hots.testem test_em --show --no-cnn

ibeis.algo.hots.testem.make_test_pairwise_fetaures(case1, case2, label, rng)[source]
ibeis.algo.hots.testem.make_test_pairwise_labels(case1, case2)[source]
ibeis.algo.hots.testem.make_test_pairwise_labels2(cases1, cases2)[source]
ibeis.algo.hots.testem.random_case_set()[source]
Returns:(labels, pairwise_feats)
Return type:tuple

CommandLine:

python -m ibeis.algo.hots.testem random_case_set --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.testem import *  # NOQA
>>> (labels, pairwise_feats) = random_case_set()
>>> result = ('(labels, pairwise_feats) = %s' % (ut.repr2((labels, pairwise_feats)),))
>>> print(result)
ibeis.algo.hots.testem.random_test_annot(num_names=5, rng=<module 'numpy.random' from '/usr/local/lib/python2.7/dist-packages/numpy/random/__init__.pyc'>)[source]

Create a single test annotation with random properties

Parameters:
  • num_names (int) – (default = 5)
  • rng (module) – random number generator (default = numpy.random)

CommandLine:

python -m ibeis.algo.hots.testem random_test_annot --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.testem import *  # NOQA
>>> num_names = 5
>>> rng = np.random.RandomState(0)
>>> result = random_test_annot(num_names, rng)
>>> print(result)
{u'qual': 1, u'yaw': 0.0, u'nfeats': 1529, u'name': 0, u'view': u'R'}
ibeis.algo.hots.testem.test_em()[source]

CommandLine:

python -m ibeis.algo.hots.testem test_em --show
python -m ibeis.algo.hots.testem test_em --show --no-cnn

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.testem import *  # NOQA
>>> P, Pn, PL, gam, num_labels = test_em()
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> pt.qt4ensure()
>>> draw_em_graph(P, Pn, PL, gam, num_labels)
>>> ut.show_if_requested()
ibeis.algo.hots.testem.test_em2(prob_names, prob_annots=None)[source]

assert prob_names.shape == (nAnnots, nNames)

ibeis.algo.hots.testem.test_rf_classifier()[source]

ibeis.algo.hots.tmp_cluster module

ibeis.algo.hots.tmp_cluster.flow()[source]

http://pmneila.github.io/PyMaxflow/maxflow.html#maxflow-fastmin

pip install PyMaxFlow pip install pystruct pip install hdbscan

ibeis.algo.hots.user_dialogs module

Deprecate

ibeis.algo.hots.user_dialogs.convert_name_suggestion_to_aids(ibs, choicetup, name_suggest_tup)[source]

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.user_dialogs import *  # NOQA
>>> import ibeis
>>> # build test data
>>> ibs = ibeis.opendb('testdb1')
>>> comp_aids = [2, 3, 4]
>>> comp_names = ['fred', 'sue', 'alice']
>>> chosen_names = ['fred']
>>> # execute function
>>> result = convert_name_suggestion_to_aids(ibs, choicetup, name_suggest_tup)
>>> # verify results
>>> print(result)
ibeis.algo.hots.user_dialogs.wait_for_user_exemplar_decision(autoexemplar_msg, exemplar_decision, exemplar_condience, incinfo=None)[source]

hooks into some method of getting user input for exemplars

TODO: really good interface

Parameters:
  • autoexemplar_msg
  • exemplar_decision
  • exemplar_condience
Returns:

True

Return type:

?

CommandLine:

python -m ibeis.algo.hots.automated_matcher --test-get_user_exemplar_decision

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.automated_matcher import *  # NOQA
>>> import ibeis  # NOQA
>>> # build test data
>>> autoexemplar_msg = '?'
>>> exemplar_decision = '?'
>>> exemplar_condience = '?'
>>> get_user_exemplar_decision(autoexemplar_msg, exemplar_decision,
>>>                            exemplar_condience)
>>> # verify results
>>> result = str(True)
>>> print(result)
ibeis.algo.hots.user_dialogs.wait_for_user_name_decision(ibs, cm, qreq_, choicetup, name_suggest_tup, incinfo=None)[source]

Prompts the user for input; hooks into some method of getting user input for names

Parameters:
  • ibs (IBEISController) –
  • cm (QueryResult) – object of feature correspondences and scores
  • autoname_func (function) –

CommandLine:

python -m ibeis.algo.hots.user_dialogs --test-wait_for_user_name_decision --show

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.user_dialogs import *  # NOQA
>>> import ibeis
>>> # build test data
>>> ibs = ibeis.opendb('testdb1')
>>> qaids = [1]
>>> daids = [2, 3, 4, 5]
>>> cm, qreq_ = ibs.query_chips(qaids, daids, cfgdict=dict(),
>>>                             return_request=True)[0]
>>> choicetup = '?'
>>> name_suggest_tup = '?'
>>> incinfo = None
>>> # execute function
>>> result = wait_for_user_name_decision(ibs, cm, qreq_, choicetup,
>>>                                      name_suggest_tup, incinfo)
>>> # verify results
>>> print(result)
>>> ut.show_if_requested()

ibeis.algo.hots.vsone_pipeline module

special pipeline for vsone specific functions

Current Issues:
  • getting feature distinctiveness is too slow; we can either try a different model or precompute feature distinctiveness.
    • we can reduce the size of the vsone shortlist
TODOLIST:
  • Unconstrained is a terrible name. It is constrained by the ratio
  • Precompute distinctiveness

  • keep feature matches from vsmany (allow fm_B)
  • Each keypoint gets
    - foregroundness
    - global distinctiveness (databasewide) LNBNN
    - local distinctiveness (imagewide) RATIO
    - regional match quality (descriptor based) COS
  • Asymmetric weight scoring

  • FIX BUGS IN score_chipmatch_nsum FIRST THING TOMORROW.
dict keys / vals are being messed up. very innocuous

Visualization to “prove” that vsone works

TestCases:
PZ_Master0 - aids 1801, 4792 - near-miss
TestFuncs:
>>> # VsMany Only
python -m ibeis.algo.hots.vsone_pipeline --test-show_post_vsmany_vser --show
>>> # VsOne Only
python -m ibeis.algo.hots.vsone_pipeline --test-vsone_reranking --show --no-vsmany_coeff
>>> # VsOne + VsMany
python -m ibeis.algo.hots.vsone_pipeline --test-vsone_reranking --show
>>> # Rerank Vsone Test Harness
python -c "import utool as ut; ut.write_modscript_alias('Tvs1RR.sh', 'dev.py', '--allgt  --db PZ_MTEST --index 1:40:2')"  # NOQA
sh Tvs1RR.sh -t custom:rrvsone_on=True custom custom:rrvsone_on=True
sh Tvs1RR.sh -t custom custom:rrvsone_on=True --print-scorediff-mat-stats
sh Tvs1RR.sh -t custom:rrvsone_on=True custom:rrvsone_on=True, --print-confusion-stats --print-scorediff-mat-stats

--print-scorediff-mat-stats --print-confusion-stats

ibeis.algo.hots.vsone_pipeline.compute_query_constrained_matches(qreq_, qaid, daid_list, H_list, config)[source]

CommandLine:

python -m ibeis.algo.hots.vsone_pipeline --test-compute_query_constrained_matches --show
python -m ibeis.algo.hots.vsone_pipeline --test-compute_query_constrained_matches --show --shownorm
python -m ibeis.algo.hots.vsone_pipeline --test-compute_query_constrained_matches --show --shownorm --homog
python -m ibeis.algo.hots.vsone_pipeline --test-compute_query_constrained_matches --show --homog
python -m ibeis.algo.hots.vsone_pipeline --test-compute_query_constrained_matches --show --homog --index 2
Example1:
>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> ibs, qreq_, prior_cm = plh.testdata_matching()
>>> config = qreq_.qparams
>>> print(config.query_cfgstr)
>>> qaid, daid_list, H_list = ut.dict_take(prior_cm, ['qaid', 'daid_list', 'H_list'])
>>> match_results = compute_query_constrained_matches(qreq_, qaid, daid_list, H_list, config)
>>> fm_SCR_list, fs_SCR_list, fm_norm_SCR_list = match_results
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> idx = ut.listfind(ibs.get_annot_nids(daid_list), ibs.get_annot_nids(qaid))
>>> index = ut.get_argval('--index', int, idx)
>>> args = (ibs, qaid, daid_list, fm_SCR_list, fs_SCR_list, fm_norm_SCR_list, H_list)
>>> show_single_match(*args, index=index)
>>> pt.set_title('constrained')
>>> pt.show_if_requested()
ibeis.algo.hots.vsone_pipeline.compute_query_unconstrained_matches(qreq_, qaid, daid_list, config)[source]

CommandLine:

python -m ibeis.algo.hots.vsone_pipeline --test-compute_query_unconstrained_matches --show
python -m ibeis.algo.hots.vsone_pipeline --test-compute_query_unconstrained_matches --show --shownorm
python -m ibeis.algo.hots.vsone_pipeline --test-compute_query_unconstrained_matches --show --shownorm --homog
Example1:
>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> ibs, qreq_, prior_cm = plh.testdata_matching()
>>> config = qreq_.qparams
>>> qaid, daid_list, H_list = ut.dict_take(prior_cm, ['qaid', 'daid_list', 'H_list'])
>>> match_results = compute_query_unconstrained_matches(qreq_, qaid, daid_list, config)
>>> fm_RAT_list, fs_RAT_list, fm_norm_RAT_list = match_results
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> idx = ut.listfind(ibs.get_annot_nids(daid_list).tolist(), ibs.get_annot_nids(qaid))
>>> args = (ibs, qaid, daid_list, fm_RAT_list, fs_RAT_list, fm_norm_RAT_list, H_list)
>>> show_single_match(*args, index=idx)
>>> pt.set_title('unconstrained')
>>> pt.show_if_requested()
ibeis.algo.hots.vsone_pipeline.extract_aligned_parts(ibs, qaid, daid, qreq_=None)[source]
Parameters:
  • ibs (IBEISController) – ibeis controller object
  • qaid (int) – query annotation id
  • daid
  • H1
  • qreq (QueryRequest) – query request object with hyper-parameters

CommandLine:

python -m ibeis.algo.hots.vsone_pipeline --exec-extract_aligned_parts:0 --show --db testdb1
python -m ibeis.algo.hots.vsone_pipeline --exec-extract_aligned_parts:1 --show
python -m ibeis.algo.hots.vsone_pipeline --exec-extract_aligned_parts:1 --show  -t default:AI=False  # see x 11
Ipy:
ibs.get_annot_chip_fpath([qaid, daid])

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(defaultdb='PZ_FlankHack')
>>> #nid_list = ibs.get_valid_nids(min_pername=2)
>>> #nid_list = nid_list[1:2]
>>> #qaid, daid = ibs.get_name_aids(nid_list)[0][0:2]
>>> qaid, daid = ibs.get_valid_aids()[0:2]
>>> qreq_ = None
>>> matches, metadata = extract_aligned_parts(ibs, qaid, daid, qreq_)
>>> rchip1_crop, rchip2_crop = metadata['rchip1_crop'], metadata['rchip2_crop']
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> #pt.imshow(vt.stack_images(rchip1_, rchip2)[0])
>>> pt.figure(doclf=True)
>>> blend = vt.blend_images(rchip1_crop, rchip2_crop)
>>> vt.matching.show_matching_dict(matches, metadata, fnum=1, mode=1)
>>> pt.imshow(vt.stack_images(rchip1_crop, rchip2_crop)[0], pnum=(1, 2, 1), fnum=2)
>>> pt.imshow(blend, pnum=(1, 2, 2), fnum=2)[0]
>>> ut.show_if_requested()

Example

>>> # SCRIPT
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(defaultdb='PZ_FlankHack')
>>> nid_list = ibs.get_valid_nids(min_pername=2)
>>> pcfgdict = ibeis.main_helpers.testdata_pipecfg()
>>> import plottool as pt
>>> custom_actions = [
>>>     ('present', ['s'], 'present', pt.present),
>>> ]
>>> for nid in ut.InteractiveIter(nid_list, custom_actions=custom_actions):
>>>     nid_list = nid_list[1:2]
>>>     qaid, daid = ibs.get_name_aids(nid)[0:2]
>>>     qreq_ = ibs.new_query_request([qaid], [daid], cfgdict=pcfgdict)
>>>     matches, metadata = extract_aligned_parts(ibs, qaid, daid, qreq_)
>>>     if matches['RAT+SV'][0].shape[0] < 4:
>>>         print('Not enough matches')
>>>         continue
>>>     rchip1_crop, rchip2_crop = metadata['rchip1_crop'], metadata['rchip2_crop']
>>>     vt.matching.show_matching_dict(matches, metadata, fnum=1, mode=1)
>>>     blend = vt.blend_images(rchip1_crop, rchip2_crop)
>>>     pt.imshow(vt.stack_images(rchip1_crop, rchip2_crop)[0], pnum=(1, 2, 1), fnum=2)
>>>     pt.imshow(blend, pnum=(1, 2, 2), fnum=2)[0]
>>>     pt.draw()
ibeis.algo.hots.vsone_pipeline.get_normalized_score_column(fsv, colx, min_, max_, power)[source]
ibeis.algo.hots.vsone_pipeline.get_selectivity_score_list(qreq_, qaid, daid_list, fm_list, cos_power)[source]
ibeis.algo.hots.vsone_pipeline.gridsearch_constrained_matches()[source]

Search spatially constrained matches

CommandLine:

python -m ibeis.algo.hots.vsone_pipeline --test-gridsearch_constrained_matches --show
python -m ibeis.algo.hots.vsone_pipeline --test-gridsearch_constrained_matches --show --qaid 41
python -m ibeis.algo.hots.vsone_pipeline --test-gridsearch_constrained_matches --show --testindex 2

Example

>>> # DISABLE_DOCTEST
>>> import plottool as pt
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> gridsearch_constrained_matches()
>>> pt.show_if_requested()
ibeis.algo.hots.vsone_pipeline.gridsearch_single_vsone_rerank()[source]

CommandLine:

python -m ibeis.algo.hots.vsone_pipeline --test-gridsearch_single_vsone_rerank --show
python -m ibeis.algo.hots.vsone_pipeline --test-gridsearch_single_vsone_rerank --show --testindex 2

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> import plottool as pt
>>> gridsearch_single_vsone_rerank()
>>> pt.show_if_requested()
ibeis.algo.hots.vsone_pipeline.gridsearch_unconstrained_matches()[source]

Search unconstrained ratio test vsone match

This still works

CommandLine:

python -m ibeis.algo.hots.vsone_pipeline --test-gridsearch_unconstrained_matches --show
python -m ibeis.algo.hots.vsone_pipeline --test-gridsearch_unconstrained_matches --show --qaid 27
python -m ibeis.algo.hots.vsone_pipeline --test-gridsearch_unconstrained_matches --show --qaid 41 --daid_list 39
python -m ibeis.algo.hots.vsone_pipeline --test-gridsearch_unconstrained_matches --show --qaid 40 --daid_list 39
python -m ibeis.algo.hots.vsone_pipeline --test-gridsearch_unconstrained_matches --show --testindex 2


python -m ibeis.algo.hots.vsone_pipeline --test-gridsearch_unconstrained_matches --show --qaid 117 --daid_list 118 --db PZ_Master0
python -m ibeis.algo.hots.vsone_pipeline --test-gridsearch_unconstrained_matches --show --qaid 117 --daid_list 118 --db PZ_Master0 --rotation_invariance

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> import plottool as pt
>>> gridsearch_unconstrained_matches()
>>> pt.show_if_requested()
ibeis.algo.hots.vsone_pipeline.marge_matches_lists(fmfs_A, fmfs_B)[source]
ibeis.algo.hots.vsone_pipeline.prepare_vsmany_chipmatch(qreq_, cm_list_SVER)[source]

gets normalized vsmany priors

Example

>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> ibs, qreq_, cm_list_SVER, qaid_list  = plh.testdata_pre_vsonerr()
>>> prepare_vsmany_chipmatch(qreq_, cm_list_SVER)
ibeis.algo.hots.vsone_pipeline.quick_vsone_flann(flann_cachedir, qvecs)[source]
ibeis.algo.hots.vsone_pipeline.refine_matches(qreq_, prior_cm, config={})[source]

CommandLine:

python -m ibeis.algo.hots.vsone_pipeline --test-refine_matches --show
python -m ibeis.algo.hots.vsone_pipeline --test-refine_matches --show --homog
python -m ibeis.algo.hots.vsone_pipeline --test-refine_matches --show --homog --sver_unconstrained
python -m ibeis.algo.hots.vsone_pipeline --test-refine_matches --show --homog --sver_constrained&
python -m ibeis.algo.hots.vsone_pipeline --test-refine_matches --show --homog --sver_constrained --sver_unconstrained&

# CONTROLLED EXAMPLES
python -m ibeis.algo.hots.vsone_pipeline --exec-refine_matches --show --qaid 1801 --controlled_daids --db PZ_Master0 --sv_on=False --present

# WITH DEV HARNESS
python dev.py -t custom:rrvsone_on=True --allgt --index 0:40 --db PZ_MTEST --print-confusion-stats --print-scorediff-mat-stats
python dev.py -t custom:rrvsone_on=True custom --allgt --index 0:40 --db PZ_MTEST --print-confusion-stats --print-scorediff-mat-stats

python dev.py -t custom:rrvsone_on=True,constrained_coeff=0 custom --qaid 12 --db PZ_MTEST \
    --print-confusion-stats --print-scorediff-mat-stats --show --va

python dev.py -t custom:rrvsone_on=True,constrained_coeff=0,maskscore_mode=kpts --qaid 12 --db PZ_MTEST  \
    --print-confusion-stats --print-scorediff-mat-stats --show --va

python dev.py -t custom:rrvsone_on=True,maskscore_mode=kpts --qaid 12 --db PZ_MTEST \
        --print-confusion-stats --print-scorediff-mat-stats --show --va


use_kptscov_scoring
Example1:
>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> ibs, qreq_, prior_cm = plh.testdata_matching('PZ_MTEST')
>>> config = qreq_.qparams
>>> unscored_cm = refine_matches(qreq_, prior_cm, config)
>>> unscored_cm.print_csv(ibs=ibs)
>>> prior_cm.print_csv(ibs=ibs)
>>> ut.quit_if_noshow()
>>> prior_cm.show_ranked_matches(qreq_, figtitle=qreq_.qparams.query_cfgstr)
>>> ut.show_if_requested()
ibeis.algo.hots.vsone_pipeline.scr_constraint_func(cfg)[source]
ibeis.algo.hots.vsone_pipeline.show_all_ranked_matches(qreq_, cm_list, fnum_offset=0, figtitle=u'')[source]

helper

ibeis.algo.hots.vsone_pipeline.show_matches(ibs, qaid, daid, fm, fs=None, fm_norm=None, H1=None, fnum=None, pnum=None, **kwargs)[source]
ibeis.algo.hots.vsone_pipeline.show_post_vsmany_vser()[source]

TESTFUNC: just shows the input data

CommandLine:

python -m ibeis.algo.hots.vsone_pipeline --test-show_post_vsmany_vser --show --homog
python -m ibeis.algo.hots.vsone_pipeline --test-show_post_vsmany_vser --show --csum --homog

Example

>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> show_post_vsmany_vser()
ibeis.algo.hots.vsone_pipeline.show_ranked_matches(ibs, cm, fnum=None)[source]
ibeis.algo.hots.vsone_pipeline.show_single_match(ibs, qaid, daid_list, fm_list, fs_list, fm_norm_list=None, H_list=None, index=None, **kwargs)[source]
ibeis.algo.hots.vsone_pipeline.single_vsone_rerank(qreq_, prior_cm, config={})[source]

Runs a single vsone-pair (query, daid_list)

CommandLine:

python -m ibeis.algo.hots.vsone_pipeline --test-single_vsone_rerank
python -m ibeis.algo.hots.vsone_pipeline --test-single_vsone_rerank --show
python -m ibeis.algo.hots.vsone_pipeline --test-single_vsone_rerank --show --qaid 18
python -m ibeis.algo.hots.vsone_pipeline --test-single_vsone_rerank --show --qaid 18
python -m ibeis.algo.hots.vsone_pipeline --test-single_vsone_rerank --show --qaid 1801 --db PZ_Master0 --controlled --verb-testdata
python -m ibeis.algo.hots.vsone_pipeline --test-single_vsone_rerank --show --qaid 1801 --controlled_daids --db PZ_Master0 --verb-testdata

python -m ibeis.algo.hots.vsone_pipeline --exec-single_vsone_rerank --show --qaid 1801 --controlled_daids --db PZ_Master0 --verb-testdata
python -m ibeis.algo.hots.vsone_pipeline --exec-single_vsone_rerank --show --qaid 1801 --controlled_daids --db PZ_Master0 --verb-testdata --sv_on=False --present
python -m ibeis.algo.hots.vsone_pipeline --exec-single_vsone_rerank --show --qaid 1801 --controlled_daids --db PZ_Master0 --verb-testdata --sv_on=False --present --affine-invariance=False
python -m ibeis.algo.hots.vsone_pipeline --exec-single_vsone_rerank --show --qaid 1801 --controlled_daids --db PZ_Master0 --verb-testdata --sv_on=False --present --affine-invariance=False --rotation-invariant=True
Example1:
>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> import plottool as pt
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> ibs, qreq_, prior_cm = plh.testdata_matching('PZ_MTEST')
>>> config = qreq_.qparams
>>> rerank_cm = single_vsone_rerank(qreq_, prior_cm, config)
>>> #rerank_cm.print_rawinfostr()
>>> rerank_cm.print_csv()
>>> print(rerank_cm.score_list)
>>> ut.quit_if_noshow()
>>> prior_cm.score_nsum(qreq_)
>>> prior_cm.show_ranked_matches(qreq_, fnum=1, figtitle='prior')
>>> rerank_cm.show_ranked_matches(qreq_, fnum=2, figtitle='rerank')
>>> pt.show_if_requested()
ibeis.algo.hots.vsone_pipeline.sver_fmfs_merge(qreq_, qaid, daid_list, fmfs_merge, config={})[source]
ibeis.algo.hots.vsone_pipeline.unsupervised_similarity(ibs, aids)[source]

http://repository.upenn.edu/cgi/viewcontent.cgi?article=1101&context=cis_papers

ibeis.algo.hots.vsone_pipeline.vsone_independant(qreq_)[source]
Parameters:qreq (QueryRequest) – query request object with hyper-parameters

CommandLine:

./dev.py -t custom --db PZ_Master0 --allgt --species=zebra_plains

python -m ibeis.algo.hots.vsone_pipeline --test-vsone_independant --show

python -m ibeis.control.manual_annot_funcs --test-get_annot_groundtruth:0 --db=PZ_Master0 --aids=117 --exec-mode  # NOQA

python -m ibeis.algo.hots.vsone_pipeline --test-vsone_independant --qaid_list=97 --daid_list=all --db PZ_Master0 --species=zebra_plains
python -m ibeis.viz.viz_name --test-show_multiple_chips --show --db PZ_Master0 --aids=118,117

python -m ibeis.algo.hots.pipeline --test-request_ibeis_query_L0:0 --show --db PZ_Master0 --qaid_list=97 --daid-list=4813,4815
--daid_list=all

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots import _pipeline_helpers as plh
>>> cfgdict = dict(pipeline_root='vsone', codename='vsone', fg_on=False)
>>> p = 'default' + ut.get_cfg_lbl(cfgdict)
>>> ibs, qreq_ = ibeis.testdata_qreq_(p=p, qaid_override=[1], daid_override=[2, 5])
>>> result = vsone_independant(qreq_)
>>> print(result)
ibeis.algo.hots.vsone_pipeline.vsone_independant_pair_hack(ibs, aid1, aid2, qreq_=None)[source]

simple hack convenience func. Uses the vsmany qreq to build a “similar” vsone qreq

TODO:
in the context menu let me change preferences for running vsone
Parameters:
  • ibs (IBEISController) – ibeis controller object
  • aid1 (int) – annotation id
  • aid2 (int) – annotation id
  • qreq (QueryRequest) – query request object with hyper-parameters(default = None)

CommandLine:

python -m ibeis.algo.hots.vsone_pipeline --exec-vsone_independant_pair_hack --show --db PZ_MTEST
python -m ibeis.algo.hots.vsone_pipeline --exec-vsone_independant_pair_hack --show --qaid=1 --daid=4
--cmd

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> import ibeis
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='testdb1')
>>> aid1 = ut.get_argval('--qaid', default=1)
>>> aid2 = ut.get_argval('--daid', default=2)
>>> result = vsone_independant_pair_hack(qreq_.ibs, aid1, aid2, qreq_)
>>> print(result)
>>> ibs = qreq_.ibs
>>> ut.show_if_requested()
ibeis.algo.hots.vsone_pipeline.vsone_name_independant_hack(ibs, nids, qreq_=None)[source]

show grid of aids with matches inside and between names

Parameters:
  • ibs (IBEISController) – ibeis controller object
  • nid (?) –
  • qreq_ (QueryRequest) – query request object with hyper-parameters(default = None)

CommandLine:

python -m ibeis.algo.hots.vsone_pipeline --exec-vsone_name_independant_hack --db PZ_MTEST --show
python -m ibeis.algo.hots.vsone_pipeline --exec-vsone_name_independant_hack --db PZ_Master1 --show --nids=5099,5181

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> import ibeis
>>> # TODO: testdata_qparams?
>>> qreq_ = ibeis.testdata_qreq_(defaultdb='testdb1')
>>> nids = ut.get_argval('--nids', type_=list, default=[1])
>>> ibs = qreq_.ibs
>>> result = vsone_name_independant_hack(qreq_.ibs, nids, qreq_)
>>> print(result)
>>> ut.show_if_requested()
ibeis.algo.hots.vsone_pipeline.vsone_reranking(qreq_, cm_list_SVER, verbose=False)[source]

Driver function

CommandLine:

python -m ibeis.algo.hots.vsone_pipeline --test-vsone_reranking --show

python -m ibeis.algo.hots.vsone_pipeline --test-vsone_reranking --show

python -m ibeis.algo.hots.vsone_pipeline --test-vsone_reranking
utprof.py -m ibeis.algo.hots.vsone_pipeline --test-vsone_reranking

Example

>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> ibs, qreq_, cm_list_SVER, qaid_list  = plh.testdata_pre_vsonerr()
>>> print(qreq_.qparams.rrvsone_cfgstr)
>>> # cm_list_SVER = ut.dict_subset(cm_list_SVER, [6])
>>> cm_list_VSONE = vsone_reranking(qreq_, cm_list_SVER)
>>> #cm_list = cm_list_VSONE
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> figtitle = 'FIXME USE SUBSET OF CFGDICT'  # ut.dict_str(rrvsone_cfgdict, newlines=False)
>>> show_all_ranked_matches(qreq_, cm_list_VSONE, figtitle=figtitle)
>>> pt.show_if_requested()
ibeis.algo.hots.vsone_pipeline.vsone_single(qaid, daid, qreq_, use_ibscache=False, verbose=None)[source]
Parameters:
  • qaid (int) – query annotation id
  • daid
  • qreq (QueryRequest) – query request object with hyper-parameters

CommandLine:

python -m ibeis.algo.hots.vsone_pipeline --exec-vsone_single --show

python -m ibeis.algo.hots.vsone_pipeline --test-vsone_single
python -m ibeis.algo.hots.vsone_pipeline --test-vsone_single --nocache
python -m ibeis.algo.hots.vsone_pipeline --test-vsone_single --nocache --show
python -m ibeis.algo.hots.vsone_pipeline --test-vsone_single --show -t default:AI=False
SeeAlso:
python -m ibeis.algo.hots.vsone_pipeline --exec-extract_aligned_parts:1 --show -t default:AI=False  # see x 11

Example

>>> # DISABLE_DOCTEST
>>> from ibeis.algo.hots.vsone_pipeline import *  # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(defaultdb='PZ_FlankHack')
>>> pcfgdict = ibeis.main_helpers.testdata_pipecfg()
>>> qaid, daid = ibs.get_name_aids(ibs.get_valid_nids()[0:1])[0][0:2]
>>> qreq_ = ibs.new_query_request([qaid], [daid], cfgdict=pcfgdict)
>>> use_ibscache = not ut.get_argflag('--noibscache')
>>> matches, metadata = vsone_single(qaid, daid, qreq_, use_ibscache)
>>> H1 = metadata['H_RAT']
>>> ut.quit_if_noshow()
>>> vt.matching.show_matching_dict(matches, metadata, mode=1)
>>> ut.show_if_requested()
ibeis.algo.hots.vsone_pipeline.vsone_single2(ibs, qaid, daid, qconfig2_, dconfig2_, use_ibscache, verbose)[source]

ibeis.algo.hots.word_index module

TODO: DEPRECATE OR REFACTOR INTO SMK

python -c "import doctest, ibeis; print(doctest.testmod(ibeis.algo.hots.word_index))"
python -m doctest -v ibeis/algo/hots/word_index.py
python -m doctest ibeis/algo/hots/word_index.py

class ibeis.algo.hots.word_index.NeighborAssignment(asgn)[source]
class ibeis.algo.hots.word_index.WordIndex(windex, ax2_aid, idx2_vec, idx2_ax, idx2_fx, flann)[source]

Bases: object

Abstract wrapper around flann

Example

>>> from ibeis.algo.hots.word_index import *  # NOQA
>>> windex, qreq_, ibs = test_windex()
add_points(windex, new_aid_list, new_vecs_list)[source]

Example

>>> from ibeis.algo.hots.word_index import *  # NOQA
>>> windex, qreq_, ibs = test_windex()
>>> new_aid_list = [2, 3, 4]
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> new_vecs_list = ibs.get_annot_vecs(new_aid_list, config2_=qreq_.get_internal_data_config2())
>>> K = 2
>>> checks = 1028
>>> (qfx2_idx1, qfx2_dist1) = windex.knn(qfx2_vec, K, checks=checks)
>>> windex.add_points(new_aid_list, new_vecs_list)
>>> (qfx2_idx2, qfx2_dist2) = windex.knn(qfx2_vec, K, checks=checks)
>>> assert qfx2_idx2.max() > qfx2_idx1.max()
empty_words(K)[source]
get_nn_aids(windex, qfx2_nnidx)[source]
Parameters:qfx2_nnidx (ndarray) – (N x K) qfx2_idx[n][k] is the index of the kth approximate nearest data vector
Returns:qfx2_aid - (N x K) qfx2_fx[n][k] is the annotation id index of the kth approximate nearest data vector
Return type:ndarray
get_nn_axs(windex, qfx2_nnidx)[source]
get_nn_featxs(windex, qfx2_nnidx)[source]
Parameters:qfx2_nnidx (ndarray) – (N x K) qfx2_idx[n][k] is the index of the kth approximate nearest data vector
Returns:qfx2_fx - (N x K) qfx2_fx[n][k] is the feature index (w.r.t the source annotation) of the kth approximate nearest data vector
Return type:ndarray
knn(windex, qfx2_vec, K, checks=1028)[source]
Parameters:
  • qfx2_vec (ndarray) – (N x D) array of N, D-dimensional query vectors
  • K (int) – number of approximate nearest words to find
Returns:

tuple of (qfx2_idx, qfx2_dist)

qfx2_idx (ndarray): (N x K) qfx2_idx[n][k] is the index of the kth approximate nearest data vector w.r.t qfx2_vec[n]

qfx2_dist (ndarray): (N x K) qfx2_dist[n][k] is the distance to the kth approximate nearest data vector w.r.t. qfx2_vec[n]

Example

>>> from ibeis.algo.hots.word_index import *  # NOQA
>>> windex, qreq_, ibs = test_windex()
>>> new_aid_list = [2, 3, 4]
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> K = 2
>>> checks = 1028
>>> (qfx2_idx, qfx2_dist) = windex.knn(qfx2_vec, K, checks=checks)
num_indexed_annots(windex)[source]
num_indexed_vecs(windex)[source]
rrr(verbose=True)

special class reloading function

ibeis.algo.hots.word_index.invert_index(vecs_list, ax_list)[source]

Aggregates descriptors of input annotations and returns inverted information
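A hedged sketch of the kind of inverted structure this could return, assuming vecs_list[i] is the (num_feats_i x dim) descriptor array for annotation index ax_list[i] (the helper name is illustrative):

>>> import numpy as np
>>> def invert_index_sketch(vecs_list, ax_list):
...     # stack all descriptors and remember which annotation / feature each row came from
...     idx2_vec = np.vstack(vecs_list)
...     idx2_ax = np.concatenate([np.full(len(v), ax, dtype=np.int64)
...                               for ax, v in zip(ax_list, vecs_list)])
...     idx2_fx = np.concatenate([np.arange(len(v)) for v in vecs_list])
...     return idx2_vec, idx2_ax, idx2_fx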

ibeis.algo.hots.word_index.new_ibeis_windex(ibs, daid_list)[source]

IBEIS interface into word_index

>>> from ibeis.algo.hots.word_index import *  # NOQA
>>> windex, qreq_, ibs = test_windex()
ibeis.algo.hots.word_index.new_word_index(aid_list=[], vecs_list=[], flann_params={}, flann_cachedir=None, indexer_cfgstr='', hash_rowids=True, use_cache=True, use_params_hash=True)[source]
ibeis.algo.hots.word_index.test_windex()[source]
ibeis.algo.hots.word_index.vlad(qfx2_vec, qfx2_cvec)[source]

Module contents

ibeis.algo.hots.reload_subs(verbose=True)[source]

Reloads ibeis.algo.hots and submodules

ibeis.algo.hots.rrrr(verbose=True)

Reloads ibeis.algo.hots and submodules