import warnings

from keras.src import backend
from keras.src import layers
from keras.src.api_export import keras_export
from keras.src.applications import imagenet_utils
from keras.src.models import Functional
from keras.src.ops import operation_utils
from keras.src.utils import file_utils

BASE_WEIGHT_PATH = (
    "https://storage.googleapis.com/tensorflow/keras-applications/mobilenet_v3/"
)

# (hash of the weights file with the top, hash of the weights file without it)
WEIGHTS_HASHES = {
    "large_224_0.75_float": (
        "765b44a33ad4005b3ac83185abf1d0eb",
        "40af19a13ebea4e2ee0c676887f69a2e",
    ),
    "large_224_1.0_float": (
        "59e551e166be033d707958cf9e29a6a7",
        "07fb09a5933dd0c8eaafa16978110389",
    ),
    "large_minimalistic_224_1.0_float": (
        "675e7b876c45c57e9e63e6d90a36599c",
        "ec5221f64a2f6d1ef965a614bdae7973",
    ),
    "small_224_0.75_float": (
        "cb65d4e5be93758266aa0a7f2c6708b7",
        "ebdb5cc8e0b497cd13a7c275d475c819",
    ),
    "small_224_1.0_float": (
        "8768d4c2e7dee89b9d02b2d03d65d862",
        "d3e8ec802a04aa4fc771ee12a9a9b836",
    ),
    "small_minimalistic_224_1.0_float": (
        "99cd97fb2fcdad2bf028eb838de69e37",
        "cde8136e733e811080d9fcd8a252f7e4",
    ),
}

BASE_DOCSTRING = """Instantiates the {name} architecture.

Reference:
- [Searching for MobileNetV3](
    https://arxiv.org/pdf/1905.02244.pdf) (ICCV 2019)

The following table describes the performance of MobileNetV3:
------------------------------------------------------------------------
MACs stands for Multiply Adds

|Classification Checkpoint|MACs(M)|Parameters(M)|Top1 Accuracy|Pixel1 CPU(ms)|
|---|---|---|---|---|
| mobilenet_v3_large_1.0_224              | 217 | 5.4 |   75.6   |   51.2  |
| mobilenet_v3_large_0.75_224             | 155 | 4.0 |   73.3   |   39.8  |
| mobilenet_v3_large_minimalistic_1.0_224 | 209 | 3.9 |   72.3   |   44.1  |
| mobilenet_v3_small_1.0_224              | 66  | 2.9 |   68.1   |   15.8  |
| mobilenet_v3_small_0.75_224             | 44  | 2.4 |   65.4   |   12.8  |
| mobilenet_v3_small_minimalistic_1.0_224 | 65  | 2.0 |   61.9   |   12.2  |

For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).

For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).

Note: each Keras Application expects a specific kind of input preprocessing.
For MobileNetV3, by default input preprocessing is included as a part of the
model (as a `Rescaling` layer), and thus
`keras.applications.mobilenet_v3.preprocess_input` is actually a
pass-through function. In this use case, MobileNetV3 models expect their
inputs to be float tensors of pixels with values in the `[0, 255]` range.
At the same time, preprocessing as a part of the model (i.e. `Rescaling`
layer) can be disabled by setting `include_preprocessing` argument to `False`.
With preprocessing disabled, MobileNetV3 models expect their inputs to be
float tensors of pixels with values in the `[-1, 1]` range.
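As a rough sketch (assuming the default `Rescaling` parameters of
`scale=1/127.5` and `offset=-1`, which are not spelled out elsewhere in this
docstring), the built-in preprocessing amounts to:

```python
# Illustrative only: mirrors what the model's Rescaling layer computes
# under the assumed parameters scale=1/127.5, offset=-1.0.
def rescale(pixel):
    return pixel / 127.5 - 1.0

# Endpoints of the [0, 255] range map onto [-1, 1]:
print(rescale(0.0), rescale(127.5), rescale(255.0))  # prints: -1.0 0.0 1.0
```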

Args:
    input_shape: Optional shape tuple, to be specified if you would
        like to use a model with an input image resolution that is not
        `(224, 224, 3)`.
        It should have exactly 3 input channels.
        You can also omit this option if you would like
        to infer `input_shape` from an `input_tensor`.
        If you choose to include both `input_tensor` and `input_shape`, then
        `input_shape` will be used if the two match; if the shapes
        do not match, an error is raised.
        E.g. `(160, 160, 3)` would be one valid value.
    alpha: controls the width of the network. This is known as the
        depth multiplier in the MobileNetV3 paper, but the name is kept for
        consistency with MobileNetV1 in Keras.
        - If `alpha < 1.0`, proportionally decreases the number
            of filters in each layer.
        - If `alpha > 1.0`, proportionally increases the number
            of filters in each layer.
        - If `alpha == 1`, the default number of filters from the paper
            is used at each layer.
    minimalistic: In addition to large and small models, this module also
        contains so-called minimalistic models. These models have the same
        per-layer dimensions as MobileNetV3; however, they don't
        utilize any of the advanced blocks (squeeze-and-excite units,
        hard-swish, and 5x5 convolutions).
        While these models are less efficient on CPU, they
        are much more performant on GPU/DSP.
    include_top: Boolean, whether to include the fully-connected
        layer at the top of the network. Defaults to `True`.
    weights: String, one of `None` (random initialization),
        `"imagenet"` (pre-training on ImageNet),
        or the path to the weights file to be loaded.
    input_tensor: Optional Keras tensor (i.e. output of
        `layers.Input()`)
        to use as image input for the model.
    pooling: String, optional pooling mode for feature extraction
        when `include_top` is `False`.
        - `None` means that the output of the model
            will be the 4D tensor output of the
            last convolutional block.
        - `avg` means that global average pooling
            will be applied to the output of the
            last convolutional block, and thus
            the output of the model will be a
            2D tensor.
        - `max` means that global max pooling will
            be applied.
    classes: Integer, optional number of classes to classify images
        into, only to be specified if `include_top` is `True`, and
        if no `weights` argument is specified.
    dropout_rate: fraction of the input units to drop on the last layer.
    classifier_activation: A `str` or callable. The activation function to use
        on the "top" layer. Ignored unless `include_top=True`. Set
        `classifier_activation=None` to return the logits of the "top" layer.
        When loading pretrained weights, `classifier_activation` can only
        be `None` or `"softmax"`.
    include_preprocessing: Boolean, whether to include the preprocessing
        layer (`Rescaling`) at the bottom of the network. Defaults to `True`.
    name: String, the name of the model.

Call arguments:
    inputs: A floating point `numpy.array` or backend-native tensor,
        4D with 3 color channels, with values in the range `[0, 255]`
        if `include_preprocessing` is `True` and in the range `[-1, 1]`
        otherwise.

Returns:
    A model instance.
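The `alpha` argument described above does not scale filter counts exactly
linearly: resulting channel counts are rounded to a multiple of 8. A minimal
sketch of that rounding rule (an illustrative `depth` helper mirroring this
module's internal behaviour, not part of the public API):

```python
# Illustrative copy of the channel-rounding rule applied with `alpha`.
def depth(v, divisor=8, min_value=None):
    if min_value is None:
        min_value = divisor
    # Round to the nearest multiple of `divisor`...
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # ...but never drop more than 10% below the requested width.
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

print(depth(24 * 0.75))  # prints: 24 (16 would lose more than 10%)
print(depth(24 * 0.5))   # prints: 16
```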
"""


def MobileNetV3(
    stack_fn,
    last_point_ch,
    input_shape=None,
    alpha=1.0,
    model_type="large",
    minimalistic=False,
    include_top=True,
    weights="imagenet",
    input_tensor=None,
    classes=1000,
    pooling=None,
    dropout_rate=0.2,
    classifier_activation="softmax",
    include_preprocessing=True,
    name=None,
):
    if not (weights in {None, "imagenet"} or file_utils.exists(weights)):
        raise ValueError(
            "The `weights` argument should be either `None` "
            "(random initialization), `imagenet` (pre-training on ImageNet), "
            "or the path to the weights file to be loaded. "
            f"Received weights={weights}"
        )

    if weights == "imagenet" and include_top and classes != 1000:
        raise ValueError(
            'If using `weights="imagenet"` with `include_top` as true, '
            f"`classes` should be 1000. Received classes={classes}"
        )

    # If both input_shape and input_tensor are used, they should match.
    if input_shape is not None and input_tensor is not None:
        try:
            is_input_t_tensor = backend.is_keras_tensor(input_tensor)
        except ValueError:
            try:
                is_input_t_tensor = backend.is_keras_tensor(
                    operation_utils.get_source_inputs(input_tensor)
                )
            except ValueError:
                raise ValueError(
                    f"input_tensor: {input_tensor} is not type input_tensor. "
                    f"Received type(input_tensor)={type(input_tensor)}"
                )
        if is_input_t_tensor:
            if backend.image_data_format() == "channels_first":
                if input_tensor.shape[1] != input_shape[1]:
                    raise ValueError(
                        "When backend.image_data_format()=channels_first, "
                        "input_shape[1] must equal input_tensor.shape[1]. "
                        f"Received input_shape={input_shape}, "
                        f"input_tensor.shape={input_tensor.shape}"
                    )
            elif input_tensor.shape[2] != input_shape[1]:
                raise ValueError(
                    "input_shape[1] must equal input_tensor.shape[2]. "
                    f"Received input_shape={input_shape}, "
                    f"input_tensor.shape={input_tensor.shape}"
                )
        else:
            raise ValueError(
                f"input_tensor specified: {input_tensor} is not a keras tensor"
            )

    # If input_shape is None, infer the shape from the provided input_tensor.
    if input_shape is None and input_tensor is not None:
        try:
            backend.is_keras_tensor(input_tensor)
        except ValueError:
            raise ValueError(
                f"input_tensor: {input_tensor} is type: {type(input_tensor)} "
                "which is not a valid type"
            )

        if backend.is_keras_tensor(input_tensor):
            if backend.image_data_format() == "channels_first":
                rows = input_tensor.shape[2]
                cols = input_tensor.shape[3]
                input_shape = (3, cols, rows)
            else:
                rows = input_tensor.shape[1]
                cols = input_tensor.shape[2]
                input_shape = (cols, rows, 3)

    # If neither input_shape nor input_tensor is given, default to a
    # free-form spatial shape with 3 channels.
    if input_shape is None and input_tensor is None:
        if backend.image_data_format() == "channels_last":
            input_shape = (None, None, 3)
        else:
            input_shape = (3, None, None)

    if backend.image_data_format() == "channels_last":
        row_axis, col_axis = (0, 1)
    else:
        row_axis, col_axis = (1, 2)
    rows = input_shape[row_axis]
    cols = input_shape[col_axis]
    if rows and cols and (rows < 32 or cols < 32):
        raise ValueError(
            "Input size must be at least 32x32; Received `input_shape="
            f"{input_shape}`"
        )
    if weights == "imagenet":
        if (
            not minimalistic
            and alpha not in [0.75, 1.0]
            or minimalistic
            and alpha != 1.0
        ):
            raise ValueError(
                "If imagenet weights are being loaded, "
                "alpha can be one of `0.75`, `1.0` for non minimalistic "
                "or `1.0` for minimalistic only."
            )

        if rows != cols or rows != 224:
            warnings.warn(
                "`input_shape` is undefined or non-square, "
                "or `rows` is not 224. "
                "Weights for input shape (224, 224) will be "
                "loaded as the default.",
                stacklevel=2,
            )

    if input_tensor is None:
        img_input = layers.Input(shape=input_shape)
    elif not backend.is_keras_tensor(input_tensor):
        img_input = layers.Input(tensor=input_tensor, shape=input_shape)
    else:
        img_input = input_tensor

    channel_axis = 1 if backend.image_data_format() == "channels_first" else -1

    if minimalistic:
        kernel = 3
        activation = relu
        se_ratio = None
    else:
        kernel = 5
        activation = hard_swish
        se_ratio = 0.25

    x = img_input
    if include_preprocessing:
        x = layers.Rescaling(scale=1.0 / 127.5, offset=-1.0)(x)
    x = layers.Conv2D(
        16,
        kernel_size=3,
        strides=(2, 2),
        padding="same",
        use_bias=False,
        name="conv",
    )(x)
    x = layers.BatchNormalization(
        axis=channel_axis, epsilon=1e-3, momentum=0.999, name="conv_bn"
    )(x)
    x = activation(x)

    x = stack_fn(x, kernel, activation, se_ratio)

    last_conv_ch = _depth(x.shape[channel_axis] * 6)

    # If the width multiplier is greater than 1, also increase the number
    # of output channels of the penultimate pointwise convolution.
    if alpha > 1.0:
        last_point_ch = _depth(last_point_ch * alpha)
    x = layers.Conv2D(
        last_conv_ch,
        kernel_size=1,
        padding="same",
        use_bias=False,
        name="conv_1",
    )(x)
    x = layers.BatchNormalization(
        axis=channel_axis, epsilon=1e-3, momentum=0.999, name="conv_1_bn"
    )(x)
    x = activation(x)

    if include_top:
        x = layers.GlobalAveragePooling2D(keepdims=True)(x)
        x = layers.Conv2D(
            last_point_ch,
            kernel_size=1,
            padding="same",
            use_bias=True,
            name="conv_2",
        )(x)
        x = activation(x)

        if dropout_rate > 0:
            x = layers.Dropout(dropout_rate)(x)
        x = layers.Conv2D(
            classes, kernel_size=1, padding="same", name="logits"
        )(x)
        x = layers.Flatten()(x)
        imagenet_utils.validate_activation(classifier_activation, weights)
        x = layers.Activation(
            activation=classifier_activation, name="predictions"
        )(x)
    elif pooling == "avg":
        x = layers.GlobalAveragePooling2D(name="avg_pool")(x)
    elif pooling == "max":
        x = layers.GlobalMaxPooling2D(name="max_pool")(x)

    # Ensure that the model takes into account
    # any potential predecessors of `input_tensor`.
    if input_tensor is not None:
        inputs = operation_utils.get_source_inputs(input_tensor)
    else:
        inputs = img_input

    model = Functional(inputs, x, name=name)

    # Load weights.
    if weights == "imagenet":
        model_name = "{}{}_224_{}_float".format(
            model_type, "_minimalistic" if minimalistic else "", str(alpha)
        )
        if include_top:
            file_name = "weights_mobilenet_v3_" + model_name + ".h5"
            file_hash = WEIGHTS_HASHES[model_name][0]
        else:
            file_name = "weights_mobilenet_v3_" + model_name + "_no_top_v2.h5"
            file_hash = WEIGHTS_HASHES[model_name][1]
        weights_path = file_utils.get_file(
            file_name,
            BASE_WEIGHT_PATH + file_name,
            cache_subdir="models",
            file_hash=file_hash,
        )
        model.load_weights(weights_path)
    elif weights is not None:
        model.load_weights(weights)

    return model


@keras_export("keras.applications.MobileNetV3Small")
def MobileNetV3Small(
    input_shape=None,
    alpha=1.0,
    minimalistic=False,
    include_top=True,
    weights="imagenet",
    input_tensor=None,
    classes=1000,
    pooling=None,
    dropout_rate=0.2,
    classifier_activation="softmax",
    include_preprocessing=True,
    name="MobileNetV3Small",
):
    def stack_fn(x, kernel, activation, se_ratio):
        def depth(d):
            return _depth(d * alpha)

        x = _inverted_res_block(x, 1, depth(16), 3, 2, se_ratio, relu, 0)
        x = _inverted_res_block(x, 72.0 / 16, depth(24), 3, 2, None, relu, 1)
        x = _inverted_res_block(x, 88.0 / 24, depth(24), 3, 1, None, relu, 2)
        x = _inverted_res_block(
            x, 4, depth(40), kernel, 2, se_ratio, activation, 3
        )
        x = _inverted_res_block(
            x, 6, depth(40), kernel, 1, se_ratio, activation, 4
        )
        x = _inverted_res_block(
            x, 6, depth(40), kernel, 1, se_ratio, activation, 5
        )
        x = _inverted_res_block(
            x, 3, depth(48), kernel, 1, se_ratio, activation, 6
        )
        x = _inverted_res_block(
            x, 3, depth(48), kernel, 1, se_ratio, activation, 7
        )
        x = _inverted_res_block(
            x, 6, depth(96), kernel, 2, se_ratio, activation, 8
        )
        x = _inverted_res_block(
            x, 6, depth(96), kernel, 1, se_ratio, activation, 9
        )
        x = _inverted_res_block(
            x, 6, depth(96), kernel, 1, se_ratio, activation, 10
        )
        return x

    return MobileNetV3(
        stack_fn,
        1024,
        input_shape,
        alpha,
        "small",
        minimalistic,
        include_top,
        weights,
        input_tensor,
        classes,
        pooling,
        dropout_rate,
        classifier_activation,
        include_preprocessing,
        name=name,
    )


@keras_export("keras.applications.MobileNetV3Large")
def MobileNetV3Large(
    input_shape=None,
    alpha=1.0,
    minimalistic=False,
    include_top=True,
    weights="imagenet",
    input_tensor=None,
    classes=1000,
    pooling=None,
    dropout_rate=0.2,
    classifier_activation="softmax",
    include_preprocessing=True,
    name="MobileNetV3Large",
):
    def stack_fn(x, kernel, activation, se_ratio):
        def depth(d):
            return _depth(d * alpha)

        x = _inverted_res_block(x, 1, depth(16), 3, 1, None, relu, 0)
        x = _inverted_res_block(x, 4, depth(24), 3, 2, None, relu, 1)
        x = _inverted_res_block(x, 3, depth(24), 3, 1, None, relu, 2)
        x = _inverted_res_block(x, 3, depth(40), kernel, 2, se_ratio, relu, 3)
        x = _inverted_res_block(x, 3, depth(40), kernel, 1, se_ratio, relu, 4)
        x = _inverted_res_block(x, 3, depth(40), kernel, 1, se_ratio, relu, 5)
        x = _inverted_res_block(x, 6, depth(80), 3, 2, None, activation, 6)
        x = _inverted_res_block(x, 2.5, depth(80), 3, 1, None, activation, 7)
        x = _inverted_res_block(x, 2.3, depth(80), 3, 1, None, activation, 8)
        x = _inverted_res_block(x, 2.3, depth(80), 3, 1, None, activation, 9)
        x = _inverted_res_block(
            x, 6, depth(112), 3, 1, se_ratio, activation, 10
        )
        x = _inverted_res_block(
            x, 6, depth(112), 3, 1, se_ratio, activation, 11
        )
        x = _inverted_res_block(
            x, 6, depth(160), kernel, 2, se_ratio, activation, 12
        )
        x = _inverted_res_block(
            x, 6, depth(160), kernel, 1, se_ratio, activation, 13
        )
        x = _inverted_res_block(
            x, 6, depth(160), kernel, 1, se_ratio, activation, 14
        )
        return x

    return MobileNetV3(
        stack_fn,
        1280,
        input_shape,
        alpha,
        "large",
        minimalistic,
        include_top,
        weights,
        input_tensor,
        classes,
        pooling,
        dropout_rate,
        classifier_activation,
        include_preprocessing,
        name=name,
    )


MobileNetV3Small.__doc__ = BASE_DOCSTRING.format(name="MobileNetV3Small")
MobileNetV3Large.__doc__ = BASE_DOCSTRING.format(name="MobileNetV3Large")


def relu(x):
    return layers.ReLU()(x)


def hard_sigmoid(x):
    return layers.ReLU(6.0)(x + 3.0) * (1.0 / 6.0)


def hard_swish(x):
    return layers.Activation("hard_swish")(x)


def _depth(v, divisor=8, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # Make sure that round down does not go down by more than 10%.
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v


def _se_block(inputs, filters, se_ratio, prefix):
    x = layers.GlobalAveragePooling2D(
        keepdims=True, name=prefix + "squeeze_excite_avg_pool"
    )(inputs)
    x = layers.Conv2D(
        _depth(filters * se_ratio),
        kernel_size=1,
        padding="same",
        name=prefix + "squeeze_excite_conv",
    )(x)
    x = layers.ReLU(name=prefix + "squeeze_excite_relu")(x)
    x = layers.Conv2D(
        filters,
        kernel_size=1,
        padding="same",
        name=prefix + "squeeze_excite_conv_1",
    )(x)
    x = hard_sigmoid(x)
    x = layers.Multiply(name=prefix + "squeeze_excite_mul")([inputs, x])
    return x


def _inverted_res_block(
    x, expansion, filters, kernel_size, stride, se_ratio, activation, block_id
):
    channel_axis = 1 if backend.image_data_format() == "channels_first" else -1
    shortcut = x
    prefix = "expanded_conv_"
    infilters = x.shape[channel_axis]
    if block_id:
        # Expand
        prefix = f"expanded_conv_{block_id}_"
        x = layers.Conv2D(
            _depth(infilters * expansion),
            kernel_size=1,
            padding="same",
            use_bias=False,
            name=prefix + "expand",
        )(x)
        x = layers.BatchNormalization(
            axis=channel_axis,
            epsilon=1e-3,
            momentum=0.999,
            name=prefix + "expand_bn",
        )(x)
        x = activation(x)

    if stride == 2:
        x = layers.ZeroPadding2D(
            padding=imagenet_utils.correct_pad(x, kernel_size),
            name=prefix + "depthwise_pad",
        )(x)
    x = layers.DepthwiseConv2D(
        kernel_size,
        strides=stride,
        padding="same" if stride == 1 else "valid",
        use_bias=False,
        name=prefix + "depthwise",
    )(x)
    x = layers.BatchNormalization(
        axis=channel_axis,
        epsilon=1e-3,
        momentum=0.999,
        name=prefix + "depthwise_bn",
    )(x)
    x = activation(x)

    if se_ratio:
        x = _se_block(x, _depth(infilters * expansion), se_ratio, prefix)

    x = layers.Conv2D(
        filters,
        kernel_size=1,
        padding="same",
        use_bias=False,
        name=prefix + "project",
    )(x)
    x = layers.BatchNormalization(
        axis=channel_axis,
        epsilon=1e-3,
        momentum=0.999,
        name=prefix + "project_bn",
    )(x)

    if stride == 1 and infilters == filters:
        x = layers.Add(name=prefix + "add")([shortcut, x])
    return x


@keras_export("keras.applications.mobilenet_v3.preprocess_input")
def preprocess_input(x, data_format=None):
    """A placeholder method for backward compatibility.

    The preprocessing logic has been included in the mobilenet_v3 model
    implementation. Users are no longer required to call this method to
    normalize the input data. This method does nothing and is only kept as a
    placeholder to align the API surface between old and new versions of the
    model.

    Args:
        x: A floating point `numpy.array` or a tensor.
        data_format: Optional data format of the image tensor/array.
            `None` means the global setting
            `keras.config.image_data_format()` is used
            (unless you changed it, it uses `"channels_last"`).
            Defaults to `None`.

    Returns:
        Unchanged `numpy.array` or tensor.
    """
    return x


@keras_export("keras.applications.mobilenet_v3.decode_predictions")
def decode_predictions(preds, top=5):
    return imagenet_utils.decode_predictions(preds, top=top)


decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__