I am a private, self-directed, independent individual! I have no academic advisors or anything of the kind. For 37 years I have worked on the topic of lossless data compression, with the particular focus of compressing random and already-compressed data.
At present I have theoretical and practical results and proofs, and I want to present the following to the world:
the Shannon-Fano entropy limit is not a limit at all, and equiprobable data compresses quite well!
Random and archived data has a clear mathematical structure, described by a single formula!
Any data can be compressed. In fact, I have the algorithm!
This algorithm compresses any data regardless of its kind and structure, i.e. one program compresses any file!
The compression works cyclically!
The same thing happened to me just yesterday. I thought it might be an error caused by the last plugins I touched, so I deleted them just in case. Even so, it still doesn't work and I keep getting the same error.
Another thing I tried was going to the file where the error occurs, at line 391: I tried modifying it and even deleting it, but then the error appeared on another line, and even in other files. I can't find a solution to the problem, and I need the page ready by next week.
This is not working in my TypeScript code. I have tried to get the text, but I don't get any text. I have coded it like below:
const range = editor.model.document.selection.getFirstRange();
let selectedText = '';
if (range) {
    for (const item of range.getItems()) {
        if (item.is("$text") && "data" in item) {
            selectedText += (item as { data: string }).data;
        }
    }
    console.log('selected text::::', selectedText);
}
Every time, I get an empty string here. Can someone help me figure out how to get the selected text in CKEditor?
Thanks!
I’ve always had this problem whenever I try to run the program. But it seems I found the solution, at least for Windows.
If the icon is in the same folder as the file, create a subfolder specifically for icons and, when referencing it, do so using:
root.iconbitmap('Folder Name/Icon Name.ico')
Example:
import tkinter as tk
from tkinter import ttk
window = tk.Tk()
window.geometry('800x250')
window.title('TEST')
window.iconbitmap('Icons/robot.ico')
window.mainloop()
I hope this is helpful, as I needed it a lot myself.
@Solace Owodaha, disable Impeller; that is what is causing this. The issue will be fixed in Flutter 3.25; you can test the fix with Firebase Test Lab (on a Samsung Galaxy A12, for example).
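If it helps, one common way to disable Impeller on Android (per the Flutter docs) is a meta-data entry in AndroidManifest.xml; this is a sketch of the opt-out mechanism, not a confirmed fix for this exact issue:

```xml
<application>
    <!-- Opt out of the Impeller rendering engine -->
    <meta-data
        android:name="io.flutter.embedding.android.EnableImpeller"
        android:value="false" />
</application>
```

For a quick local test you can also pass `--no-enable-impeller` to `flutter run` without touching the manifest.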
Where did you get the info that the spring-webflux WebClient has a default 30-minute DNS cache setting? It seems our services also run into this issue, and I'm trying to figure out how to fix it.
I tried something like networkaddress.cache.ttl=0, but that doesn't seem to work for me.
Does this work with Django? I am trying to do OAuth based on tokens. I am getting the code and state, but no tokens are generated; I get a "tokens expired" error.
While I can't comment due to reputation, it's worth noting that @Arsalan Mohseni's answer can have performance impacts.
import 'tailwindcss/tailwind.css'; is designed for development, not production (https://tailwindcss.com/docs/installation/play-cdn). It includes all Tailwind classes, which can hurt performance; Tailwind should only bundle the classes your project actually needs.
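For production, the usual approach is a build-time setup where Tailwind scans your source files and emits only the classes you actually use. A minimal sketch of such a config (the content paths are assumptions for a typical project layout, not from the answer above):

```javascript
// tailwind.config.js
module.exports = {
  // Tailwind scans these files and only generates the classes found in them
  content: ['./src/**/*.{html,js,ts,jsx,tsx}'],
  theme: { extend: {} },
  plugins: [],
};
```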
Thanks for the suggestion on how to resolve the Movesense timestamp issue. Before I was pointed to this article, I had attempted to interpolate from the announcement timestamps.
There are fundamentally two approaches I have attempted here:
1. Ignore the timestamps in the raw data.
This approach assumes you use only three parameters: sample_frequency, reference_time and sample_size.
You can get reference_time from the JSON file name in the Movesense Showcase app, and it is straightforward to get the sample data size.
2. Rely on the announcement timestamps captured in the raw data and interpolate from these values.
This approach does not require you to remember what sample frequency you set at the time of recording.
However, you may come across another issue: the time delta is not always 20; you may get 19. This is the only way to prevent the timestamps from being out of step after interpolation. The root cause is that the announcement timestamps captured in the JSON file are not evenly incremented to begin with.
Any suggestion on how we should address this?
from typing import Dict, List, Literal

import numpy as np
import pandas as pd


def _get_timestamp_interval(sample_frequency: int = 104,
                            output_time_unit: Literal['second', 'millisecond', 'nanosecond'] = 'millisecond') -> int:
    """
    Calculate the time interval between samples based on the sample frequency.

    :param sample_frequency: The frequency of sampling in Hertz (Hz). Default is 104 Hz.
    :param output_time_unit: The desired output time unit ('second', 'millisecond', 'nanosecond').
                             Default is 'millisecond'.
    :return: Time interval in the specified unit.
    """
    # Calculate the time interval in milliseconds
    time_interval_ms = 1000 / sample_frequency  # in milliseconds
    # Use match syntax to convert to the desired time unit.
    # Note: int() truncates, so fractional periods (e.g. 1000/52 = 19.23 ms) lose precision.
    match output_time_unit:
        case 'second':
            return int(time_interval_ms / 1000)  # Convert to seconds
        case 'millisecond':
            return int(time_interval_ms)  # Already in milliseconds
        case 'nanosecond':
            return int(time_interval_ms * 1_000_000)  # Convert to nanoseconds
        case _:
            raise ValueError("Invalid time unit. Choose from 'second', 'millisecond', or 'nanosecond'.")


def calculate_timestamps(reference_time: pd.Timestamp, time_interval: int, num_samples: int) -> List[pd.Timestamp]:
    """
    Generate a list of timestamps based on a starting datetime and a time interval.

    :param reference_time: The starting datetime for the timestamps.
    :param time_interval: The time interval in milliseconds between each timestamp.
    :param num_samples: The number of timestamps to generate.
    :return: A list of generated timestamps.
    """
    _delta = pd.Timedelta(milliseconds=time_interval)  # Convert time interval to Timedelta
    # Create an array of sample indices
    sample_indices = np.arange(num_samples)
    # Calculate timestamps using vectorized operations
    timestamps = reference_time + sample_indices * _delta
    return timestamps.tolist()  # Convert to list before returning


def verify_timestep_increment_distribution(self, df: pd.DataFrame) -> None:
    """
    Verify the distribution of timestep increments in a DataFrame.

    This function calculates the increment between consecutive timesteps,
    adds it as a new column to the DataFrame, and then prints a summary
    of the increment distribution.

    Args:
        df (pd.DataFrame): A DataFrame with a 'timestep' column.

    Returns:
        None: Prints the verification results.
    """
    # Ensure the DataFrame is sorted by timestep
    df = df.sort_values('timestep')
    # Calculate the increment between consecutive timesteps
    df['increment'] = df['timestep'].diff()
    # Count occurrences of each unique increment
    increment_counts: Dict[int, int] = df['increment'].value_counts().to_dict()
    # Print results
    print()
    print(f"Data File: {self.file_name}")
    print(f"Sensor ID: {self.device_id}")
    print(f"Reference Time: {self.start_time}")
    print(f"Raw Data Type: {self.raw_data_type.upper()}")
    print("Timestep Increment Distribution Results:")
    print("-----------------------------------------------------")
    print("Increment | Count")
    print("-----------------------------------------------------")
    for increment, count in sorted(increment_counts.items()):
        print(f"{increment:9.0f} | {count}")
    print("-----------------------------------------------------")
    print(f"Total timesteps: {len(df)}")
    print(f"Unique increments: {len(increment_counts)}")
    # Additional statistics
    print("\nAdditional Statistics:")
    print(f"Min increment: {df['increment'].min()}")
    print(f"Max increment: {df['increment'].max()}")
    print(f"Median increment: {df['increment'].median()}")
    print()
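On the 19-vs-20 question: at 52 Hz the true sample period is 1000/52 ≈ 19.23 ms, so whole-millisecond timestamps cannot have a constant step; a fixed integer interval must either alternate or drift. One way around the drift, as a sketch (calculate_timestamps_exact is my own name, not part of any Movesense API), is to compute each offset from the exact fractional period and round per sample, so truncation error never accumulates:

```python
import numpy as np
import pandas as pd

def calculate_timestamps_exact(reference_time: pd.Timestamp,
                               sample_frequency: int,
                               num_samples: int) -> pd.DatetimeIndex:
    """Generate timestamps from the exact fractional period so no drift accumulates."""
    period_ms = 1000 / sample_frequency  # e.g. 1000/52 = 19.2307... ms
    # Round each cumulative offset independently; the error stays below 0.5 ms forever
    offsets = pd.to_timedelta(np.round(np.arange(num_samples) * period_ms), unit='ms')
    return reference_time + offsets
```

With this approach the consecutive increments still alternate between 19 and 20 ms at 52 Hz, which is expected and harmless: every timestamp stays within half a millisecond of its ideal position, instead of sliding further out of step with each sample.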
Lambda@Edge is replicated across many edge locations. If you try to delete a function that is attached to CloudFront, you will notice that it takes time. I recently had the same problem when I changed headers I had already added. But when I deploy the Lambda@Edge function with the CDK, it apparently ensures that all the versions are updated. So, in short: yes, it can take a while, since it is replicated. But this is a guess based on observed behavior.
This sounds very complex indeed. I don't understand why you're trying to work against Gerrit?
This usually means sub-branches, and a LOT of commits, many of them with commit messages like "WIP" or "tmp".
This sounds like a guide on how not to use Gerrit. Why have commits with pointless messages? Just amend the commit you're on?
The point of Gerrit is not to always be 1 commit away from main, but for each commit to be meaningful; "WIP" and "tmp" are not.
If you find yourself multiple meaningful commits "away" from main, and you want each one to be reviewed individually, then Gerrit will create a chain of changes for the user to easily review.
I commonly get messages back from Gerrit on things I need to change before it accepts the CL.
Unsure what you mean here? Like what?
As the review progresses, I keep developing in my dev branch to accommodate/modify the feature.
Why? Just keep amending the commit you're working on and uploading it? Why care so much about the intermediate state of a commit?
Overall I feel like you're trying to work like you're using a PR workflow, when you're not.
I've created a blog post here if you care to see how I use it.
Overall I think your question is probably better answered on the Gerrit mailing list, where it's easier to reply to the multiple points you raise. The Gerrit community doesn't really monitor Stack Overflow.
I want to thank you for your question as I feel like many new Gerrit users have the same problem and I hope this can be a place for people to learn.
My company is currently asking this exact question. I am looking at it from the perspective of keeping the develop branch clean of feature issues. My proposal is to pull origin/develop into the feature branch, then have a review on the feature branch; if all is good, we take the changes back into origin/develop, and if the review fails we just continue work on the feature. I am thinking about keeping feature work out of develop to keep features running in parallel. @Arthur, what did your company end up going with?
I have this same issue after using the Upgrade Assistant to upgrade from 4.8 to 8.0. Most of the variables in the post-build events no longer work. I was using $(ProjectDir), $(OutputPath), $(TargetPath), and $(ProgramData), and it appears that none of these work anymore in this project, except $(ProgramData), which is still working. Has anyone found a solution to fix the existing project without building a new one?
I might be a bit late, but this problem still persists today (2024-11-19). This actually turns out to be Node's fault; as a matter of fact, Node 18, 20 and 22 all have the same problem. You can either downgrade to Node 16, or you can add "type": "commonjs" to your package.json file and change your files to ".js". Normally this is fine, unless you have top-level "await" in your entry file.
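For reference, a minimal sketch of the package.json change mentioned above (the other fields are placeholders for your project's own values):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "type": "commonjs"
}
```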
add_action('init', function () {
    load_plugin_textdomain( 'name-plugin', false, basename( dirname( __FILE__ ) ) . '/languages/' );
});
And it is still not working.
ArgumentError - [Xcodeproj] Unable to find compatibility version string for object version 70.
I am getting this error when trying to use it. Does anyone know how to fix it?
Answering here in case anyone else faces the same issue.
The HttpSession was not getting invalidated because the JSESSIONID cookie was not being sent with the logout request, due to the SameSite changes; this needs to be configured.
I have a question about the same thing.
Where do I declare the $sheet variable so that I am able to call the setColumn method?
Like $sheet = ? Where do I do this?
I installed the AWX Operator in AKS and it's working, but I have an issue with env variables injected by AKS.
Even though I added the no_proxy env variable with ENV in my Containerfile, AKS overrides my values.
Do you have any idea how I can force env variables into the automation-job pod created by AWX when a playbook is running?
As on this site: as you scroll down, the navbar moves up and finally becomes fixed. Does anyone have an example of how I can do this? I'm going to do this in an Angular project.
I also had the same issue. The answer explains where to add the relayState, but it shows a placeholder and does not explain how to configure it. Can anyone explain what to put in the relay-state-here placeholder?